00:00:00.000 Started by upstream project "autotest-per-patch" build number 121029
00:00:00.000 originally caused by:
00:00:00.000  Started by user sys_sgci
00:00:00.008 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.009 The recommended git tool is: git
00:00:00.009 using credential 00000000-0000-0000-0000-000000000002
00:00:00.011 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.086 Fetching changes from the remote Git repository
00:00:00.088 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.135 Using shallow fetch with depth 1
00:00:00.135 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.135 > git --version # timeout=10
00:00:00.166 > git --version # 'git version 2.39.2'
00:00:00.167 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.167 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.167 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.309 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.321 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.332 Checking out Revision 6e1fadd1eee50389429f9abb33dde5face8ca717 (FETCH_HEAD)
00:00:05.332 > git config core.sparsecheckout # timeout=10
00:00:05.342 > git read-tree -mu HEAD # timeout=10
00:00:05.358 > git checkout -f 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=5
00:00:05.379 Commit message: "pool: attach build logs for failed merge builds"
00:00:05.380 > git rev-list --no-walk 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=10
00:00:05.496 [Pipeline] Start of Pipeline
00:00:05.506 [Pipeline] library
00:00:05.507 Loading library shm_lib@master
00:00:05.508 Library shm_lib@master is cached. Copying from home.
00:00:05.522 [Pipeline] node
00:00:05.532 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.533 [Pipeline] {
00:00:05.543 [Pipeline] catchError
00:00:05.544 [Pipeline] {
00:00:05.559 [Pipeline] wrap
00:00:05.567 [Pipeline] {
00:00:05.571 [Pipeline] stage
00:00:05.572 [Pipeline] { (Prologue)
00:00:05.720 [Pipeline] sh
00:00:05.999 + logger -p user.info -t JENKINS-CI
00:00:06.013 [Pipeline] echo
00:00:06.014 Node: WFP22
00:00:06.020 [Pipeline] sh
00:00:06.313 [Pipeline] setCustomBuildProperty
00:00:06.321 [Pipeline] echo
00:00:06.322 Cleanup processes
00:00:06.326 [Pipeline] sh
00:00:06.605 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.605 2559411 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.620 [Pipeline] sh
00:00:06.905 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.905 ++ grep -v 'sudo pgrep'
00:00:06.905 ++ awk '{print $1}'
00:00:06.905 + sudo kill -9
00:00:06.905 + true
00:00:06.918 [Pipeline] cleanWs
00:00:06.928 [WS-CLEANUP] Deleting project workspace...
00:00:06.928 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.935 [WS-CLEANUP] done
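The "Cleanup processes" step above is a kill-anything-left-from-a-previous-run pattern: pgrep lists matching processes, the pgrep invocation itself is filtered out, and the surviving PIDs are killed (here the list was empty, hence the bare "kill -9" papered over with "+ true"). A minimal standalone sketch of that pattern, assuming the same workspace path; "xargs -r" is a GNU convenience added here so the empty case does nothing instead of failing:

  #!/usr/bin/env bash
  # Kill any SPDK processes left over from a previous run of this workspace.
  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # pgrep -af matches against the full command line; drop the pgrep itself,
  # keep only the PID column, and kill whatever remains.
  sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}' \
      | xargs -r sudo kill -9   # -r: do nothing when the PID list is empty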
00:00:06.939 [Pipeline] setCustomBuildProperty
00:00:06.951 [Pipeline] sh
00:00:07.227 + sudo git config --global --replace-all safe.directory '*'
00:00:07.285 [Pipeline] nodesByLabel
00:00:07.286 Found a total of 1 nodes with the 'sorcerer' label
00:00:07.294 [Pipeline] httpRequest
00:00:07.299 HttpMethod: GET
00:00:07.299 URL: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:07.302 Sending request to url: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:07.306 Response Code: HTTP/1.1 200 OK
00:00:07.306 Success: Status code 200 is in the accepted range: 200,404
00:00:07.307 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:07.576 [Pipeline] sh
00:00:07.870 + tar --no-same-owner -xf jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:07.888 [Pipeline] httpRequest
00:00:07.893 HttpMethod: GET
00:00:07.893 URL: http://10.211.164.96/packages/spdk_7aadd67597ed72ec8b0f25009a1f066253441227.tar.gz
00:00:07.897 Sending request to url: http://10.211.164.96/packages/spdk_7aadd67597ed72ec8b0f25009a1f066253441227.tar.gz
00:00:07.900 Response Code: HTTP/1.1 200 OK
00:00:07.900 Success: Status code 200 is in the accepted range: 200,404
00:00:07.901 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_7aadd67597ed72ec8b0f25009a1f066253441227.tar.gz
00:00:27.043 [Pipeline] sh
00:00:27.326 + tar --no-same-owner -xf spdk_7aadd67597ed72ec8b0f25009a1f066253441227.tar.gz
00:00:29.876 [Pipeline] sh
00:00:30.156 + git -C spdk log --oneline -n5
00:00:30.156 7aadd6759 app/trace: emit owner descriptions
00:00:30.156 bcad2741e trace: rename trace_event's poller_id to owner_id
00:00:30.156 85741177a trace: add concept of "owner" to trace files
00:00:30.156 bf2cbb6d8 trace: rename "per_lcore_history" to just "data"
00:00:30.156 035bc63a4 trace: add trace_flags_fini()
00:00:30.169 [Pipeline] }
00:00:30.186 [Pipeline] // stage
00:00:30.195 [Pipeline] stage
00:00:30.197 [Pipeline] { (Prepare)
00:00:30.216 [Pipeline] writeFile
00:00:30.234 [Pipeline] sh
00:00:30.518 + logger -p user.info -t JENKINS-CI
00:00:30.532 [Pipeline] sh
00:00:30.816 + logger -p user.info -t JENKINS-CI
00:00:30.829 [Pipeline] sh
00:00:31.114 + cat autorun-spdk.conf
00:00:31.114 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:31.114 SPDK_TEST_NVMF=1
00:00:31.114 SPDK_TEST_NVME_CLI=1
00:00:31.114 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:31.114 SPDK_TEST_NVMF_NICS=e810
00:00:31.114 SPDK_TEST_VFIOUSER=1
00:00:31.114 SPDK_RUN_UBSAN=1
00:00:31.114 NET_TYPE=phy
00:00:31.122 RUN_NIGHTLY=0
00:00:31.127 [Pipeline] readFile
00:00:31.152 [Pipeline] withEnv
00:00:31.154 [Pipeline] {
00:00:31.169 [Pipeline] sh
00:00:31.455 + set -ex
00:00:31.455 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:31.455 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:31.455 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:31.455 ++ SPDK_TEST_NVMF=1
00:00:31.455 ++ SPDK_TEST_NVME_CLI=1
00:00:31.455 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:31.455 ++ SPDK_TEST_NVMF_NICS=e810
00:00:31.455 ++ SPDK_TEST_VFIOUSER=1
00:00:31.455 ++ SPDK_RUN_UBSAN=1
00:00:31.455 ++ NET_TYPE=phy
00:00:31.455 ++ RUN_NIGHTLY=0
00:00:31.455 + case $SPDK_TEST_NVMF_NICS in
00:00:31.455 + DRIVERS=ice
00:00:31.455 + [[ tcp == \r\d\m\a ]]
00:00:31.455 + [[ -n ice ]]
00:00:31.455 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:31.455 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:31.455 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:31.455 rmmod: ERROR: Module irdma is not currently loaded
00:00:31.456 rmmod: ERROR: Module i40iw is not currently loaded
00:00:31.456 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:31.456 + true
00:00:31.456 + for D in $DRIVERS
00:00:31.456 + sudo modprobe ice
00:00:31.456 + exit 0
00:00:31.465 [Pipeline] }
00:00:31.483 [Pipeline] // withEnv
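For the e810 NICs this job uses over plain TCP, the prepare step strips any RDMA-capable modules that a previous run may have left loaded, then loads the ice driver. A minimal sketch of the same sequence, using exactly the module list traced above (the rmmod failures are expected when the modules are absent, hence the "|| true"):

  #!/usr/bin/env bash
  # NIC prep for SPDK_TEST_NVMF_NICS=e810 with SPDK_TEST_NVMF_TRANSPORT=tcp:
  # no RDMA modules wanted, only the Intel E810 "ice" driver.
  DRIVERS=ice
  sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
  for D in $DRIVERS; do
      sudo modprobe "$D"
  done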
00:00:31.489 [Pipeline] }
00:00:31.510 [Pipeline] // stage
00:00:31.520 [Pipeline] catchError
00:00:31.522 [Pipeline] {
00:00:31.537 [Pipeline] timeout
00:00:31.537 Timeout set to expire in 40 min
00:00:31.539 [Pipeline] {
00:00:31.555 [Pipeline] stage
00:00:31.558 [Pipeline] { (Tests)
00:00:31.574 [Pipeline] sh
00:00:31.859 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:31.859 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:31.859 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:31.859 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:31.859 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:31.859 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:31.859 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:31.859 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:31.859 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:31.859 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:31.859 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:31.859 + source /etc/os-release
00:00:31.859 ++ NAME='Fedora Linux'
00:00:31.859 ++ VERSION='38 (Cloud Edition)'
00:00:31.859 ++ ID=fedora
00:00:31.859 ++ VERSION_ID=38
00:00:31.859 ++ VERSION_CODENAME=
00:00:31.859 ++ PLATFORM_ID=platform:f38
00:00:31.859 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:31.859 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:31.859 ++ LOGO=fedora-logo-icon
00:00:31.859 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:31.859 ++ HOME_URL=https://fedoraproject.org/
00:00:31.859 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:31.859 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:31.859 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:31.859 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:31.859 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:31.859 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:31.859 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:31.859 ++ SUPPORT_END=2024-05-14
00:00:31.859 ++ VARIANT='Cloud Edition'
00:00:31.859 ++ VARIANT_ID=cloud
00:00:31.859 + uname -a
00:00:31.859 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:31.859 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:35.153 Hugepages
00:00:35.153 node hugesize free / total
00:00:35.153 node0 1048576kB 0 / 0
00:00:35.153 node0 2048kB 0 / 0
00:00:35.153 node1 1048576kB 0 / 0
00:00:35.153 node1 2048kB 0 / 0
00:00:35.153
00:00:35.153 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:35.153 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:35.153 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:35.153 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:35.153 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:35.153 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:35.153 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:35.153 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:35.153 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:35.153 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:35.153 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:35.153 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:35.153 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:35.153 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:35.153 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:35.153 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:35.153 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:35.153 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:35.153 + rm -f /tmp/spdk-ld-path
00:00:35.153 + source autorun-spdk.conf
00:00:35.153 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:35.153 ++ SPDK_TEST_NVMF=1
00:00:35.153 ++ SPDK_TEST_NVME_CLI=1
00:00:35.153 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:35.153 ++ SPDK_TEST_NVMF_NICS=e810
00:00:35.153 ++ SPDK_TEST_VFIOUSER=1
00:00:35.153 ++ SPDK_RUN_UBSAN=1
00:00:35.153 ++ NET_TYPE=phy
00:00:35.153 ++ RUN_NIGHTLY=0
00:00:35.153 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:35.153 + [[ -n '' ]]
00:00:35.153 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:35.153 + for M in /var/spdk/build-*-manifest.txt
00:00:35.153 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:35.153 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:35.153 + for M in /var/spdk/build-*-manifest.txt
00:00:35.153 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:35.153 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:35.153 ++ uname
00:00:35.153 + [[ Linux == \L\i\n\u\x ]]
00:00:35.153 + sudo dmesg -T
00:00:35.153 + sudo dmesg --clear
00:00:35.153 + dmesg_pid=2560302
00:00:35.153 + [[ Fedora Linux == FreeBSD ]]
00:00:35.153 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:35.153 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:35.153 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:35.153 + [[ -x /usr/src/fio-static/fio ]]
00:00:35.153 + export FIO_BIN=/usr/src/fio-static/fio
00:00:35.153 + sudo dmesg -Tw
00:00:35.153 + FIO_BIN=/usr/src/fio-static/fio
00:00:35.153 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:35.153 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:35.153 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:35.153 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:35.153 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:35.153 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:35.153 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:35.153 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:35.153 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:35.153 Test configuration:
00:00:35.153 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:35.153 SPDK_TEST_NVMF=1
00:00:35.153 SPDK_TEST_NVME_CLI=1
00:00:35.153 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:35.153 SPDK_TEST_NVMF_NICS=e810
00:00:35.153 SPDK_TEST_VFIOUSER=1
00:00:35.153 SPDK_RUN_UBSAN=1
00:00:35.153 NET_TYPE=phy
00:00:35.153 RUN_NIGHTLY=0
21:15:57 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
21:15:57 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
21:15:57 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
21:15:57 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
21:15:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:15:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:15:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:15:57 -- paths/export.sh@5 -- $ export PATH
21:15:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:15:57 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
21:15:57 -- common/autobuild_common.sh@435 -- $ date +%s
21:15:57 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713986157.XXXXXX
21:15:57 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713986157.5yLPQM
21:15:57 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
21:15:57 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
21:15:57 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
21:15:57 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
21:15:57 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
21:15:57 -- common/autobuild_common.sh@451 -- $ get_config_params
21:15:57 -- common/autotest_common.sh@385 -- $ xtrace_disable
21:15:57 -- common/autotest_common.sh@10 -- $ set +x
00:00:35.154 21:15:57 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
21:15:57 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
21:15:57 -- pm/common@17 -- $ local monitor
21:15:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:15:57 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2560336
21:15:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:15:57 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2560338
21:15:57 -- pm/common@21 -- $ date +%s
21:15:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:15:57 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2560340
21:15:57 -- pm/common@21 -- $ date +%s
21:15:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:15:57 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2560343
21:15:57 -- pm/common@26 -- $ sleep 1
21:15:57 -- pm/common@21 -- $ date +%s
21:15:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713986157
21:15:57 -- pm/common@21 -- $ date +%s
21:15:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713986157
21:15:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713986157
21:15:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713986157
00:00:35.154 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713986157_collect-vmstat.pm.log
00:00:35.154 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713986157_collect-cpu-load.pm.log
00:00:35.154 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713986157_collect-bmc-pm.bmc.pm.log
00:00:35.154 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713986157_collect-cpu-temp.pm.log
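Before autobuild proper starts, pm/common forks four resource monitors into the background and records their PIDs so they can be reaped when the trap fires on exit. A minimal sketch of the same launch pattern, assuming the monitor scripts and flags traced above (-d output directory, -l log to file, -p filename prefix):

  #!/usr/bin/env bash
  # Launch the SPDK power/perf monitors in the background, keyed by name.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  OUT=$SPDK/../output/power
  PREFIX=monitor.autobuild.sh.$(date +%s)
  declare -A MONITOR_RESOURCES_PIDS
  for mon in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
      sudo -E "$SPDK/scripts/perf/pm/$mon" -d "$OUT" -l -p "$PREFIX" &
      MONITOR_RESOURCES_PIDS[$mon]=$!   # reaped later by stop_monitor_resources
  done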
00:00:36.093 21:15:58 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:00:36.093 21:15:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:36.093 21:15:58 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:36.093 21:15:58 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:36.093 21:15:58 -- spdk/autobuild.sh@16 -- $ date -u
00:00:36.093 Wed Apr 24 07:15:58 PM UTC 2024
00:00:36.093 21:15:58 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:36.093 v24.05-pre-442-g7aadd6759
00:00:36.093 21:15:58 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:36.093 21:15:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:36.093 21:15:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:36.093 21:15:58 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:36.093 21:15:58 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:36.093 21:15:58 -- common/autotest_common.sh@10 -- $ set +x
00:00:36.353 ************************************
00:00:36.353 START TEST ubsan
00:00:36.353 ************************************
00:00:36.353 21:15:59 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:00:36.353 using ubsan
00:00:36.353
00:00:36.353 real 0m0.000s
00:00:36.353 user 0m0.000s
00:00:36.353 sys 0m0.000s
00:00:36.353 21:15:59 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:00:36.353 21:15:59 -- common/autotest_common.sh@10 -- $ set +x
00:00:36.353 ************************************
00:00:36.353 END TEST ubsan
00:00:36.353 ************************************
00:00:36.353 21:15:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:36.353 21:15:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:36.353 21:15:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:36.353 21:15:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:36.353 21:15:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:36.353 21:15:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:36.353 21:15:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:36.353 21:15:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:36.353 21:15:59 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:36.353 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:36.353 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:36.945 Using 'verbs' RDMA provider
00:00:49.932 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:02.149 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:02.719 Creating mk/config.mk...done.
00:01:02.719 Creating mk/cc.flags.mk...done.
00:01:02.719 Type 'make' to build.
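The flag set passed to configure above is get_config_params plus --with-shared. To reproduce this build outside Jenkins, a sketch of the equivalent steps, assuming a local SPDK checkout with submodules and fio sources in /usr/src/fio:

  #!/usr/bin/env bash
  cd spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j"$(nproc)"   # the CI node runs make -j112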
00:01:02.719 21:16:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:01:02.719 21:16:25 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:02.719 21:16:25 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:02.719 21:16:25 -- common/autotest_common.sh@10 -- $ set +x
00:01:02.719 ************************************
00:01:02.719 START TEST make
00:01:02.719 ************************************
00:01:02.719 21:16:25 -- common/autotest_common.sh@1111 -- $ make -j112
00:01:03.288 make[1]: Nothing to be done for 'all'.
00:01:04.666 The Meson build system
00:01:04.666 Version: 1.3.1
00:01:04.666 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:04.666 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:04.666 Build type: native build
00:01:04.666 Project name: libvfio-user
00:01:04.666 Project version: 0.0.1
00:01:04.666 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:04.666 C linker for the host machine: cc ld.bfd 2.39-16
00:01:04.666 Host machine cpu family: x86_64
00:01:04.666 Host machine cpu: x86_64
00:01:04.666 Run-time dependency threads found: YES
00:01:04.667 Library dl found: YES
00:01:04.667 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:04.667 Run-time dependency json-c found: YES 0.17
00:01:04.667 Run-time dependency cmocka found: YES 1.1.7
00:01:04.667 Program pytest-3 found: NO
00:01:04.667 Program flake8 found: NO
00:01:04.667 Program misspell-fixer found: NO
00:01:04.667 Program restructuredtext-lint found: NO
00:01:04.667 Program valgrind found: YES (/usr/bin/valgrind)
00:01:04.667 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:04.667 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:04.667 Compiler for C supports arguments -Wwrite-strings: YES
00:01:04.667 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:04.667 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:04.667 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:04.667 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:04.667 Build targets in project: 8
00:01:04.667 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:04.667 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:04.667
00:01:04.667 libvfio-user 0.0.1
00:01:04.667
00:01:04.667 User defined options
00:01:04.667 buildtype : debug
00:01:04.667 default_library: shared
00:01:04.667 libdir : /usr/local/lib
00:01:04.667
00:01:04.667 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:04.925 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:04.925 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:04.925 [2/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:04.925 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:04.925 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:04.925 [5/37] Compiling C object samples/null.p/null.c.o
00:01:04.925 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:04.925 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:04.925 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:04.925 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:04.925 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:04.925 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:04.925 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:04.925 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:04.925 [14/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:04.925 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:04.925 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:04.925 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:04.925 [18/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:04.925 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:04.925 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:04.925 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:04.925 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:04.925 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:04.925 [24/37] Compiling C object samples/server.p/server.c.o
00:01:05.184 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:05.184 [26/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:05.184 [27/37] Compiling C object samples/client.p/client.c.o
00:01:05.184 [28/37] Linking target test/unit_tests
00:01:05.184 [29/37] Linking target samples/client
00:01:05.184 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:05.184 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:05.443 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:05.443 [33/37] Linking target samples/gpio-pci-idio-16
00:01:05.443 [34/37] Linking target samples/server
00:01:05.443 [35/37] Linking target samples/lspci
00:01:05.443 [36/37] Linking target samples/null
00:01:05.443 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:05.443 INFO: autodetecting backend as ninja
00:01:05.443 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:05.443 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:05.702 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:05.702 ninja: no work to do.
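SPDK builds the bundled libvfio-user with meson and then stages it into its own tree via DESTDIR rather than installing system-wide, which is why the log shows "meson install" prefixed with DESTDIR=. A sketch of the same sequence under the same layout (the meson options mirror the "User defined options" block above; option names are standard meson, not SPDK-specific):

  #!/usr/bin/env bash
  SRC=spdk/libvfio-user
  BUILD=spdk/build/libvfio-user/build-debug
  meson setup "$BUILD" "$SRC" --buildtype=debug -Ddefault_library=shared
  ninja -C "$BUILD"
  # Stage into the SPDK tree instead of /usr/local:
  DESTDIR=$(pwd)/spdk/build/libvfio-user meson install --quiet -C "$BUILD"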
00:01:10.979 The Meson build system
00:01:10.979 Version: 1.3.1
00:01:10.979 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:10.979 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:10.979 Build type: native build
00:01:10.979 Program cat found: YES (/usr/bin/cat)
00:01:10.979 Project name: DPDK
00:01:10.979 Project version: 23.11.0
00:01:10.979 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:10.979 C linker for the host machine: cc ld.bfd 2.39-16
00:01:10.979 Host machine cpu family: x86_64
00:01:10.979 Host machine cpu: x86_64
00:01:10.979 Message: ## Building in Developer Mode ##
00:01:10.979 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:10.979 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:10.979 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:10.979 Program python3 found: YES (/usr/bin/python3)
00:01:10.979 Program cat found: YES (/usr/bin/cat)
00:01:10.979 Compiler for C supports arguments -march=native: YES
00:01:10.980 Checking for size of "void *" : 8
00:01:10.980 Checking for size of "void *" : 8 (cached)
00:01:10.980 Library m found: YES
00:01:10.980 Library numa found: YES
00:01:10.980 Has header "numaif.h" : YES
00:01:10.980 Library fdt found: NO
00:01:10.980 Library execinfo found: NO
00:01:10.980 Has header "execinfo.h" : YES
00:01:10.980 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:10.980 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:10.980 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:10.980 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:10.980 Run-time dependency openssl found: YES 3.0.9
00:01:10.980 Run-time dependency libpcap found: YES 1.10.4
00:01:10.980 Has header "pcap.h" with dependency libpcap: YES
00:01:10.980 Compiler for C supports arguments -Wcast-qual: YES
00:01:10.980 Compiler for C supports arguments -Wdeprecated: YES
00:01:10.980 Compiler for C supports arguments -Wformat: YES
00:01:10.980 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:10.980 Compiler for C supports arguments -Wformat-security: NO
00:01:10.980 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:10.980 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:10.980 Compiler for C supports arguments -Wnested-externs: YES
00:01:10.980 Compiler for C supports arguments -Wold-style-definition: YES
00:01:10.980 Compiler for C supports arguments -Wpointer-arith: YES
00:01:10.980 Compiler for C supports arguments -Wsign-compare: YES
00:01:10.980 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:10.980 Compiler for C supports arguments -Wundef: YES
00:01:10.980 Compiler for C supports arguments -Wwrite-strings: YES
00:01:10.980 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:10.980 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:10.980 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:10.980 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:10.980 Program objdump found: YES (/usr/bin/objdump)
00:01:10.980 Compiler for C supports arguments -mavx512f: YES
00:01:10.980 Checking if "AVX512 checking" compiles: YES
00:01:10.980 Fetching value of define "__SSE4_2__" : 1
00:01:10.980 Fetching value of define "__AES__" : 1
00:01:10.980 Fetching value of define "__AVX__" : 1
00:01:10.980 Fetching value of define "__AVX2__" : 1
00:01:10.980 Fetching value of define "__AVX512BW__" : 1
00:01:10.980 Fetching value of define "__AVX512CD__" : 1
00:01:10.980 Fetching value of define "__AVX512DQ__" : 1
00:01:10.980 Fetching value of define "__AVX512F__" : 1
00:01:10.980 Fetching value of define "__AVX512VL__" : 1
00:01:10.980 Fetching value of define "__PCLMUL__" : 1
00:01:10.980 Fetching value of define "__RDRND__" : 1
00:01:10.980 Fetching value of define "__RDSEED__" : 1
00:01:10.980 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:10.980 Fetching value of define "__znver1__" : (undefined)
00:01:10.980 Fetching value of define "__znver2__" : (undefined)
00:01:10.980 Fetching value of define "__znver3__" : (undefined)
00:01:10.980 Fetching value of define "__znver4__" : (undefined)
00:01:10.980 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:10.980 Message: lib/log: Defining dependency "log"
00:01:10.980 Message: lib/kvargs: Defining dependency "kvargs"
00:01:10.980 Message: lib/telemetry: Defining dependency "telemetry"
00:01:10.980 Checking for function "getentropy" : NO
00:01:10.980 Message: lib/eal: Defining dependency "eal"
00:01:10.980 Message: lib/ring: Defining dependency "ring"
00:01:10.980 Message: lib/rcu: Defining dependency "rcu"
00:01:10.980 Message: lib/mempool: Defining dependency "mempool"
00:01:10.980 Message: lib/mbuf: Defining dependency "mbuf"
00:01:10.980 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:10.980 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:10.980 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:10.980 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:10.980 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:10.980 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:10.980 Compiler for C supports arguments -mpclmul: YES
00:01:10.980 Compiler for C supports arguments -maes: YES
00:01:10.980 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:10.980 Compiler for C supports arguments -mavx512bw: YES
00:01:10.980 Compiler for C supports arguments -mavx512dq: YES
00:01:10.980 Compiler for C supports arguments -mavx512vl: YES
00:01:10.980 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:10.980 Compiler for C supports arguments -mavx2: YES
00:01:10.980 Compiler for C supports arguments -mavx: YES
00:01:10.980 Message: lib/net: Defining dependency "net"
00:01:10.980 Message: lib/meter: Defining dependency "meter"
00:01:10.980 Message: lib/ethdev: Defining dependency "ethdev"
00:01:10.980 Message: lib/pci: Defining dependency "pci"
00:01:10.980 Message: lib/cmdline: Defining dependency "cmdline"
00:01:10.980 Message: lib/hash: Defining dependency "hash"
00:01:10.980 Message: lib/timer: Defining dependency "timer"
00:01:10.980 Message: lib/compressdev: Defining dependency "compressdev"
00:01:10.980 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:10.980 Message: lib/dmadev: Defining dependency "dmadev"
00:01:10.980 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:10.980 Message: lib/power: Defining dependency "power"
00:01:10.980 Message: lib/reorder: Defining dependency "reorder"
00:01:10.980 Message: lib/security: Defining dependency "security"
00:01:10.980 Has header "linux/userfaultfd.h" : YES
00:01:10.980 Has header "linux/vduse.h" : YES
00:01:10.980 Message: lib/vhost: Defining dependency "vhost"
00:01:10.980 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:10.980 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:10.980 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:10.980 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:10.980 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:10.980 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:10.980 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:10.980 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:10.980 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:10.980 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:10.980 Program doxygen found: YES (/usr/bin/doxygen)
00:01:10.980 Configuring doxy-api-html.conf using configuration
00:01:10.980 Configuring doxy-api-man.conf using configuration
00:01:10.980 Program mandb found: YES (/usr/bin/mandb)
00:01:10.980 Program sphinx-build found: NO
00:01:10.980 Configuring rte_build_config.h using configuration
00:01:10.980 Message:
00:01:10.980 =================
00:01:10.980 Applications Enabled
00:01:10.980 =================
00:01:10.980
00:01:10.980 apps:
00:01:10.980
00:01:10.980
00:01:10.980 Message:
00:01:10.980 =================
00:01:10.980 Libraries Enabled
00:01:10.980 =================
00:01:10.980
00:01:10.980 libs:
00:01:10.980 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:10.980 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:10.980 cryptodev, dmadev, power, reorder, security, vhost,
00:01:10.980
00:01:10.980 Message:
00:01:10.980 ===============
00:01:10.980 Drivers Enabled
00:01:10.980 ===============
00:01:10.980
00:01:10.980 common:
00:01:10.980
00:01:10.980 bus:
00:01:10.980 pci, vdev,
00:01:10.980 mempool:
00:01:10.980 ring,
00:01:10.980 dma:
00:01:10.980
00:01:10.980 net:
00:01:10.980
00:01:10.980 crypto:
00:01:10.980
00:01:10.980 compress:
00:01:10.980
00:01:10.980 vdpa:
00:01:10.980
00:01:10.980
00:01:10.980 Message:
00:01:10.980 =================
00:01:10.980 Content Skipped
00:01:10.980 =================
00:01:10.980
00:01:10.980 apps:
00:01:10.980 dumpcap: explicitly disabled via build config
00:01:10.980 graph: explicitly disabled via build config
00:01:10.980 pdump: explicitly disabled via build config
00:01:10.980 proc-info: explicitly disabled via build config
00:01:10.980 test-acl: explicitly disabled via build config
00:01:10.980 test-bbdev: explicitly disabled via build config
00:01:10.980 test-cmdline: explicitly disabled via build config
00:01:10.980 test-compress-perf: explicitly disabled via build config
00:01:10.980 test-crypto-perf: explicitly disabled via build config
00:01:10.980 test-dma-perf: explicitly disabled via build config
00:01:10.980 test-eventdev: explicitly disabled via build config
00:01:10.980 test-fib: explicitly disabled via build config
00:01:10.980 test-flow-perf: explicitly disabled via build config
00:01:10.980 test-gpudev: explicitly disabled via build config
00:01:10.980 test-mldev: explicitly disabled via build config
00:01:10.980 test-pipeline: explicitly disabled via build config
00:01:10.980 test-pmd: explicitly disabled via build config
00:01:10.980 test-regex: explicitly disabled via build config
00:01:10.980 test-sad: explicitly disabled via build config
00:01:10.980 test-security-perf: explicitly disabled via build config
00:01:10.980
00:01:10.980 libs:
00:01:10.980 metrics: explicitly disabled via build config
00:01:10.980 acl: explicitly disabled via build config
00:01:10.980 bbdev: explicitly disabled via build config
00:01:10.980 bitratestats: explicitly disabled via build config
00:01:10.980 bpf: explicitly disabled via build config
00:01:10.980 cfgfile: explicitly disabled via build config
00:01:10.980 distributor: explicitly disabled via build config
00:01:10.980 efd: explicitly disabled via build config
00:01:10.980 eventdev: explicitly disabled via build config
00:01:10.980 dispatcher: explicitly disabled via build config
00:01:10.980 gpudev: explicitly disabled via build config
00:01:10.980 gro: explicitly disabled via build config
00:01:10.980 gso: explicitly disabled via build config
00:01:10.980 ip_frag: explicitly disabled via build config
00:01:10.980 jobstats: explicitly disabled via build config
00:01:10.980 latencystats: explicitly disabled via build config
00:01:10.980 lpm: explicitly disabled via build config
00:01:10.980 member: explicitly disabled via build config
00:01:10.980 pcapng: explicitly disabled via build config
00:01:10.980 rawdev: explicitly disabled via build config
00:01:10.980 regexdev: explicitly disabled via build config
00:01:10.980 mldev: explicitly disabled via build config
00:01:10.980 rib: explicitly disabled via build config
00:01:10.980 sched: explicitly disabled via build config
00:01:10.980 stack: explicitly disabled via build config
00:01:10.980 ipsec: explicitly disabled via build config
00:01:10.981 pdcp: explicitly disabled via build config
00:01:10.981 fib: explicitly disabled via build config
00:01:10.981 port: explicitly disabled via build config
00:01:10.981 pdump: explicitly disabled via build config
00:01:10.981 table: explicitly disabled via build config
00:01:10.981 pipeline: explicitly disabled via build config
00:01:10.981 graph: explicitly disabled via build config
00:01:10.981 node: explicitly disabled via build config
00:01:10.981
00:01:10.981 drivers:
00:01:10.981 common/cpt: not in enabled drivers build config
00:01:10.981 common/dpaax: not in enabled drivers build config
00:01:10.981 common/iavf: not in enabled drivers build config
00:01:10.981 common/idpf: not in enabled drivers build config
00:01:10.981 common/mvep: not in enabled drivers build config
00:01:10.981 common/octeontx: not in enabled drivers build config
00:01:10.981 bus/auxiliary: not in enabled drivers build config
00:01:10.981 bus/cdx: not in enabled drivers build config
00:01:10.981 bus/dpaa: not in enabled drivers build config
00:01:10.981 bus/fslmc: not in enabled drivers build config
00:01:10.981 bus/ifpga: not in enabled drivers build config
00:01:10.981 bus/platform: not in enabled drivers build config
00:01:10.981 bus/vmbus: not in enabled drivers build config
00:01:10.981 common/cnxk: not in enabled drivers build config
00:01:10.981 common/mlx5: not in enabled drivers build config
00:01:10.981 common/nfp: not in enabled drivers build config
00:01:10.981 common/qat: not in enabled drivers build config
00:01:10.981 common/sfc_efx: not in enabled drivers build config
00:01:10.981 mempool/bucket: not in enabled drivers build config
00:01:10.981 mempool/cnxk: not in enabled drivers build config
00:01:10.981 mempool/dpaa: not in enabled drivers build config
00:01:10.981 mempool/dpaa2: not in enabled drivers build config
00:01:10.981 mempool/octeontx: not in enabled drivers build config
00:01:10.981 mempool/stack: not in enabled drivers build config
00:01:10.981 dma/cnxk: not in enabled drivers build config
00:01:10.981 dma/dpaa: not in enabled drivers build config
00:01:10.981 dma/dpaa2: not in enabled drivers build config
00:01:10.981 dma/hisilicon: not in enabled drivers build config
00:01:10.981 dma/idxd: not in enabled drivers build config
00:01:10.981 dma/ioat: not in enabled drivers build config
00:01:10.981 dma/skeleton: not in enabled drivers build config
00:01:10.981 net/af_packet: not in enabled drivers build config
00:01:10.981 net/af_xdp: not in enabled drivers build config
00:01:10.981 net/ark: not in enabled drivers build config
00:01:10.981 net/atlantic: not in enabled drivers build config
00:01:10.981 net/avp: not in enabled drivers build config
00:01:10.981 net/axgbe: not in enabled drivers build config
00:01:10.981 net/bnx2x: not in enabled drivers build config
00:01:10.981 net/bnxt: not in enabled drivers build config
00:01:10.981 net/bonding: not in enabled drivers build config
00:01:10.981 net/cnxk: not in enabled drivers build config
00:01:10.981 net/cpfl: not in enabled drivers build config
00:01:10.981 net/cxgbe: not in enabled drivers build config
00:01:10.981 net/dpaa: not in enabled drivers build config
00:01:10.981 net/dpaa2: not in enabled drivers build config
00:01:10.981 net/e1000: not in enabled drivers build config
00:01:10.981 net/ena: not in enabled drivers build config
00:01:10.981 net/enetc: not in enabled drivers build config
00:01:10.981 net/enetfec: not in enabled drivers build config
00:01:10.981 net/enic: not in enabled drivers build config
00:01:10.981 net/failsafe: not in enabled drivers build config
00:01:10.981 net/fm10k: not in enabled drivers build config
00:01:10.981 net/gve: not in enabled drivers build config
00:01:10.981 net/hinic: not in enabled drivers build config
00:01:10.981 net/hns3: not in enabled drivers build config
00:01:10.981 net/i40e: not in enabled drivers build config
00:01:10.981 net/iavf: not in enabled drivers build config
00:01:10.981 net/ice: not in enabled drivers build config
00:01:10.981 net/idpf: not in enabled drivers build config
00:01:10.981 net/igc: not in enabled drivers build config
00:01:10.981 net/ionic: not in enabled drivers build config
00:01:10.981 net/ipn3ke: not in enabled drivers build config
00:01:10.981 net/ixgbe: not in enabled drivers build config
00:01:10.981 net/mana: not in enabled drivers build config
00:01:10.981 net/memif: not in enabled drivers build config
00:01:10.981 net/mlx4: not in enabled drivers build config
00:01:10.981 net/mlx5: not in enabled drivers build config
00:01:10.981 net/mvneta: not in enabled drivers build config
00:01:10.981 net/mvpp2: not in enabled drivers build config
00:01:10.981 net/netvsc: not in enabled drivers build config
00:01:10.981 net/nfb: not in enabled drivers build config
00:01:10.981 net/nfp: not in enabled drivers build config
00:01:10.981 net/ngbe: not in enabled drivers build config
00:01:10.981 net/null: not in enabled drivers build config
00:01:10.981 net/octeontx: not in enabled drivers build config
00:01:10.981 net/octeon_ep: not in enabled drivers build config
00:01:10.981 net/pcap: not in enabled drivers build config
00:01:10.981 net/pfe: not in enabled drivers build config
00:01:10.981 net/qede: not in enabled drivers build config
00:01:10.981 net/ring: not in enabled drivers build config
00:01:10.981 net/sfc: not in enabled drivers build config
00:01:10.981 net/softnic: not in enabled drivers build config
00:01:10.981 net/tap: not in enabled drivers build config
00:01:10.981 net/thunderx: not in enabled drivers build config
00:01:10.981 net/txgbe: not in enabled drivers build config
00:01:10.981 net/vdev_netvsc: not in enabled drivers build config
00:01:10.981 net/vhost: not in enabled drivers build config
00:01:10.981 net/virtio: not in enabled drivers build config
00:01:10.981 net/vmxnet3: not in enabled drivers build config
00:01:10.981 raw/*: missing internal dependency, "rawdev"
00:01:10.981 crypto/armv8: not in enabled drivers build config
00:01:10.981 crypto/bcmfs: not in enabled drivers build config
00:01:10.981 crypto/caam_jr: not in enabled drivers build config
00:01:10.981 crypto/ccp: not in enabled drivers build config
00:01:10.981 crypto/cnxk: not in enabled drivers build config
00:01:10.981 crypto/dpaa_sec: not in enabled drivers build config
00:01:10.981 crypto/dpaa2_sec: not in enabled drivers build config
00:01:10.981 crypto/ipsec_mb: not in enabled drivers build config
00:01:10.981 crypto/mlx5: not in enabled drivers build config
00:01:10.981 crypto/mvsam: not in enabled drivers build config
00:01:10.981 crypto/nitrox: not in enabled drivers build config
00:01:10.981 crypto/null: not in enabled drivers build config
00:01:10.981 crypto/octeontx: not in enabled drivers build config
00:01:10.981 crypto/openssl: not in enabled drivers build config
00:01:10.981 crypto/scheduler: not in enabled drivers build config
00:01:10.981 crypto/uadk: not in enabled drivers build config
00:01:10.981 crypto/virtio: not in enabled drivers build config
00:01:10.981 compress/isal: not in enabled drivers build config
00:01:10.981 compress/mlx5: not in enabled drivers build config
00:01:10.981 compress/octeontx: not in enabled drivers build config
00:01:10.981 compress/zlib: not in enabled drivers build config
00:01:10.981 regex/*: missing internal dependency, "regexdev"
00:01:10.981 ml/*: missing internal dependency, "mldev"
00:01:10.981 vdpa/ifc: not in enabled drivers build config
00:01:10.981 vdpa/mlx5: not in enabled drivers build config
00:01:10.981 vdpa/nfp: not in enabled drivers build config
00:01:10.981 vdpa/sfc: not in enabled drivers build config
00:01:10.981 event/*: missing internal dependency, "eventdev"
00:01:10.981 baseband/*: missing internal dependency, "bbdev"
00:01:10.981 gpu/*: missing internal dependency, "gpudev"
00:01:10.981
00:01:10.981
00:01:11.241 Build targets in project: 85
00:01:11.241
00:01:11.241 DPDK 23.11.0
00:01:11.241
00:01:11.241 User defined options
00:01:11.241 buildtype : debug
00:01:11.241 default_library : shared
00:01:11.241 libdir : lib
00:01:11.241 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:11.241 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:11.241 c_link_args :
00:01:11.241 cpu_instruction_set: native
00:01:11.241 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex
00:01:11.241 disable_libs : pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso
00:01:11.241 enable_docs : false
00:01:11.241 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:11.241 enable_kmods : false
00:01:11.241 tests : false
00:01:11.241
00:01:11.241 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
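The bundled DPDK is configured the same way SPDK's configure drives it: a debug shared-library build with everything SPDK does not need switched off. A standalone sketch of an equivalent invocation, assuming stock DPDK meson option names; the disable_apps and disable_libs values are the comma-separated lists shown in the "User defined options" block above and are elided here for brevity:

  #!/usr/bin/env bash
  # Equivalent DPDK configuration, values copied from the log's option summary.
  meson setup spdk/dpdk/build-tmp spdk/dpdk \
      --buildtype=debug -Ddefault_library=shared \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Dtests=false -Denable_docs=false -Denable_kmods=false \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror'
  ninja -C spdk/dpdk/build-tmp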
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:12.084 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:12.084 [37/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:12.084 [38/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:12.084 [39/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:12.084 [40/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:12.342 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:12.342 [42/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:12.342 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:12.342 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:12.342 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:12.342 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:12.342 [47/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:12.342 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:12.342 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:12.342 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:12.342 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:12.342 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:12.342 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:12.342 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:12.342 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:12.342 [56/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:12.342 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:12.342 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:12.342 [59/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:12.342 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:12.342 [61/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:12.343 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:12.343 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:12.343 [64/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:12.343 [65/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:12.343 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:12.343 [67/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.343 [68/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:12.343 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:12.343 [70/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:12.343 [71/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:12.343 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:12.343 [73/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:12.343 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 
00:01:12.343 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:12.343 [76/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.343 [77/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:12.343 [78/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:12.343 [79/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:12.343 [80/265] Linking static target lib/librte_meter.a 00:01:12.343 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:12.343 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:12.343 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:12.343 [84/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:12.343 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:12.343 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:12.343 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:12.343 [88/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:12.343 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:12.343 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:12.343 [91/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:12.343 [92/265] Linking static target lib/librte_ring.a 00:01:12.343 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:12.343 [94/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:12.343 [95/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:12.343 [96/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:12.343 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:12.601 [98/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:12.601 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:12.601 [100/265] Linking static target lib/librte_telemetry.a 00:01:12.601 [101/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:12.601 [102/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:12.601 [103/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:12.601 [104/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:12.601 [105/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:12.601 [106/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:12.601 [107/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:12.601 [108/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:12.601 [109/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:12.601 [110/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:12.601 [111/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:12.601 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:12.601 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:12.601 [114/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:12.601 
[115/265] Linking static target lib/librte_mempool.a 00:01:12.601 [116/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:12.601 [117/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:12.601 [118/265] Linking static target lib/librte_cmdline.a 00:01:12.601 [119/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:12.601 [120/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:12.601 [121/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:12.601 [122/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:12.601 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:12.601 [124/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:12.601 [125/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:12.601 [126/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:12.601 [127/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:12.601 [128/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:12.601 [129/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:12.601 [130/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:12.601 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:12.601 [132/265] Linking static target lib/librte_timer.a 00:01:12.601 [133/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:12.601 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:12.601 [135/265] Linking static target lib/librte_net.a 00:01:12.601 [136/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:12.601 [137/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:12.601 [138/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:12.601 [139/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:12.601 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:12.601 [141/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:12.601 [142/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:12.601 [143/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:12.601 [144/265] Linking static target lib/librte_compressdev.a 00:01:12.601 [145/265] Linking static target lib/librte_dmadev.a 00:01:12.601 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:12.601 [147/265] Linking static target lib/librte_eal.a 00:01:12.601 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:12.601 [149/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:12.601 [150/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:12.601 [151/265] Linking static target lib/librte_rcu.a 00:01:12.601 [152/265] Linking static target lib/librte_power.a 00:01:12.601 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:12.601 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:12.601 [155/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.601 [156/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 
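
Interleaved with the compiles, the "Generating lib/<name>.sym_chk with a custom command (wrapped by meson to capture output)" entries are meson custom targets that appear to validate each DPDK library's exported symbols against its version map. Conceptually the check reduces to something like the sketch below; this is illustrative only, not DPDK's actual check script:

    # List the global symbols a built archive actually defines; a real check would
    # then compare this list against the names declared in the library's version.map.
    nm --defined-only -g build-tmp/lib/librte_kvargs.a | awk 'NF==3 {print $3}' | sort -u
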
00:01:12.860 [157/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:12.860 [158/265] Linking static target lib/librte_security.a 00:01:12.860 [159/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:12.860 [160/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:12.860 [161/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:12.860 [162/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:12.860 [163/265] Linking static target lib/librte_reorder.a 00:01:12.860 [164/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:12.860 [165/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:12.860 [166/265] Linking target lib/librte_log.so.24.0 00:01:12.860 [167/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:12.860 [168/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.860 [169/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:12.860 [170/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:12.860 [171/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:12.860 [172/265] Linking static target lib/librte_mbuf.a 00:01:12.860 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:12.860 [174/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.860 [175/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:12.860 [176/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:12.860 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:12.860 [178/265] Linking static target lib/librte_hash.a 00:01:12.860 [179/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:12.860 [180/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:12.860 [181/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:12.860 [182/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:12.860 [183/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:12.860 [184/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:12.860 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:12.860 [186/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.860 [187/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:12.860 [188/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:12.860 [189/265] Linking target lib/librte_kvargs.so.24.0 00:01:12.860 [190/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:12.860 [191/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:12.860 [192/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:13.138 [193/265] Linking static target lib/librte_cryptodev.a 00:01:13.138 [194/265] Linking static target drivers/librte_bus_vdev.a 00:01:13.138 [195/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.138 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:13.138 [197/265] Generating lib/telemetry.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:13.138 [198/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.138 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:13.138 [200/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:13.138 [201/265] Linking target lib/librte_telemetry.so.24.0 00:01:13.138 [202/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:13.138 [203/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:13.138 [204/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:13.138 [205/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:13.138 [206/265] Linking static target drivers/librte_mempool_ring.a 00:01:13.138 [207/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.138 [208/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:13.138 [209/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:13.138 [210/265] Linking static target drivers/librte_bus_pci.a 00:01:13.138 [211/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:13.138 [212/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.428 [213/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:13.428 [214/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.428 [215/265] Linking static target lib/librte_ethdev.a 00:01:13.428 [216/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.428 [217/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.428 [218/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.428 [219/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:13.687 [220/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.687 [221/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.687 [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.945 [223/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.945 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.514 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:14.514 [226/265] Linking static target lib/librte_vhost.a 00:01:15.083 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.993 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.266 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.561 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.561 [231/265] Linking target lib/librte_eal.so.24.0 00:01:25.561 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:25.561 
[233/265] Linking target lib/librte_ring.so.24.0 00:01:25.561 [234/265] Linking target lib/librte_meter.so.24.0 00:01:25.561 [235/265] Linking target lib/librte_pci.so.24.0 00:01:25.561 [236/265] Linking target lib/librte_dmadev.so.24.0 00:01:25.561 [237/265] Linking target lib/librte_timer.so.24.0 00:01:25.561 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:25.561 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:25.561 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:25.561 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:25.561 [242/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:25.561 [243/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:25.561 [244/265] Linking target lib/librte_rcu.so.24.0 00:01:25.561 [245/265] Linking target lib/librte_mempool.so.24.0 00:01:25.561 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:25.561 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:25.561 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:25.820 [249/265] Linking target lib/librte_mbuf.so.24.0 00:01:25.820 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:25.820 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:25.820 [252/265] Linking target lib/librte_compressdev.so.24.0 00:01:25.820 [253/265] Linking target lib/librte_net.so.24.0 00:01:25.820 [254/265] Linking target lib/librte_reorder.so.24.0 00:01:25.820 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:26.079 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:26.079 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:26.079 [258/265] Linking target lib/librte_hash.so.24.0 00:01:26.079 [259/265] Linking target lib/librte_cmdline.so.24.0 00:01:26.079 [260/265] Linking target lib/librte_security.so.24.0 00:01:26.079 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:26.358 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:26.358 [263/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:26.358 [264/265] Linking target lib/librte_power.so.24.0 00:01:26.358 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:26.358 INFO: autodetecting backend as ninja 00:01:26.358 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:27.346 CC lib/ut_mock/mock.o 00:01:27.346 CC lib/log/log.o 00:01:27.346 CC lib/log/log_flags.o 00:01:27.346 CC lib/log/log_deprecated.o 00:01:27.346 CC lib/ut/ut.o 00:01:27.346 LIB libspdk_ut_mock.a 00:01:27.604 SO libspdk_ut_mock.so.6.0 00:01:27.604 LIB libspdk_log.a 00:01:27.604 LIB libspdk_ut.a 00:01:27.604 SO libspdk_log.so.7.0 00:01:27.604 SO libspdk_ut.so.2.0 00:01:27.604 SYMLINK libspdk_ut_mock.so 00:01:27.605 SYMLINK libspdk_log.so 00:01:27.605 SYMLINK libspdk_ut.so 00:01:27.862 CC lib/util/base64.o 00:01:27.862 CC lib/util/bit_array.o 00:01:27.862 CC lib/util/cpuset.o 00:01:27.862 CC lib/util/crc32.o 00:01:27.862 CC lib/util/crc16.o 00:01:27.862 CC lib/util/crc32_ieee.o 00:01:27.862 CC lib/util/crc32c.o 00:01:27.862 CC lib/util/crc64.o 00:01:27.862 CC 
lib/util/dif.o 00:01:27.862 CC lib/util/fd.o 00:01:27.862 CC lib/util/file.o 00:01:27.862 CC lib/util/iov.o 00:01:27.862 CC lib/util/hexlify.o 00:01:27.862 CC lib/util/math.o 00:01:27.862 CC lib/util/pipe.o 00:01:27.862 CC lib/util/strerror_tls.o 00:01:27.862 CC lib/util/string.o 00:01:27.862 CC lib/util/fd_group.o 00:01:27.862 CC lib/dma/dma.o 00:01:27.862 CC lib/util/uuid.o 00:01:27.862 CC lib/util/xor.o 00:01:27.862 CC lib/util/zipf.o 00:01:27.862 CC lib/ioat/ioat.o 00:01:27.862 CXX lib/trace_parser/trace.o 00:01:28.120 CC lib/vfio_user/host/vfio_user_pci.o 00:01:28.120 CC lib/vfio_user/host/vfio_user.o 00:01:28.120 LIB libspdk_dma.a 00:01:28.120 SO libspdk_dma.so.4.0 00:01:28.120 LIB libspdk_ioat.a 00:01:28.120 SO libspdk_ioat.so.7.0 00:01:28.120 SYMLINK libspdk_dma.so 00:01:28.378 SYMLINK libspdk_ioat.so 00:01:28.378 LIB libspdk_vfio_user.a 00:01:28.378 LIB libspdk_util.a 00:01:28.378 SO libspdk_vfio_user.so.5.0 00:01:28.378 SYMLINK libspdk_vfio_user.so 00:01:28.378 SO libspdk_util.so.9.0 00:01:28.635 SYMLINK libspdk_util.so 00:01:28.635 LIB libspdk_trace_parser.a 00:01:28.635 SO libspdk_trace_parser.so.5.0 00:01:28.895 SYMLINK libspdk_trace_parser.so 00:01:28.895 CC lib/json/json_parse.o 00:01:28.895 CC lib/json/json_write.o 00:01:28.895 CC lib/conf/conf.o 00:01:28.895 CC lib/json/json_util.o 00:01:28.895 CC lib/vmd/vmd.o 00:01:28.895 CC lib/vmd/led.o 00:01:28.895 CC lib/env_dpdk/env.o 00:01:28.895 CC lib/idxd/idxd.o 00:01:28.895 CC lib/idxd/idxd_user.o 00:01:28.895 CC lib/env_dpdk/memory.o 00:01:28.895 CC lib/env_dpdk/pci.o 00:01:28.895 CC lib/env_dpdk/init.o 00:01:28.895 CC lib/env_dpdk/threads.o 00:01:28.895 CC lib/env_dpdk/pci_ioat.o 00:01:28.895 CC lib/rdma/rdma_verbs.o 00:01:28.895 CC lib/env_dpdk/pci_virtio.o 00:01:28.895 CC lib/rdma/common.o 00:01:28.895 CC lib/env_dpdk/pci_vmd.o 00:01:28.895 CC lib/env_dpdk/pci_idxd.o 00:01:28.895 CC lib/env_dpdk/sigbus_handler.o 00:01:28.895 CC lib/env_dpdk/pci_event.o 00:01:28.895 CC lib/env_dpdk/pci_dpdk.o 00:01:28.895 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:28.895 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:29.154 LIB libspdk_conf.a 00:01:29.154 LIB libspdk_json.a 00:01:29.154 SO libspdk_conf.so.6.0 00:01:29.154 SO libspdk_json.so.6.0 00:01:29.154 LIB libspdk_rdma.a 00:01:29.154 SYMLINK libspdk_conf.so 00:01:29.154 SO libspdk_rdma.so.6.0 00:01:29.154 SYMLINK libspdk_json.so 00:01:29.412 SYMLINK libspdk_rdma.so 00:01:29.412 LIB libspdk_idxd.a 00:01:29.412 SO libspdk_idxd.so.12.0 00:01:29.412 LIB libspdk_vmd.a 00:01:29.412 SYMLINK libspdk_idxd.so 00:01:29.412 SO libspdk_vmd.so.6.0 00:01:29.671 SYMLINK libspdk_vmd.so 00:01:29.671 CC lib/jsonrpc/jsonrpc_server.o 00:01:29.671 CC lib/jsonrpc/jsonrpc_client.o 00:01:29.671 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:29.671 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:29.929 LIB libspdk_jsonrpc.a 00:01:29.929 SO libspdk_jsonrpc.so.6.0 00:01:29.929 LIB libspdk_env_dpdk.a 00:01:29.929 SYMLINK libspdk_jsonrpc.so 00:01:29.929 SO libspdk_env_dpdk.so.14.0 00:01:30.187 SYMLINK libspdk_env_dpdk.so 00:01:30.187 CC lib/rpc/rpc.o 00:01:30.445 LIB libspdk_rpc.a 00:01:30.445 SO libspdk_rpc.so.6.0 00:01:30.703 SYMLINK libspdk_rpc.so 00:01:30.962 CC lib/trace/trace_flags.o 00:01:30.962 CC lib/trace/trace.o 00:01:30.962 CC lib/trace/trace_rpc.o 00:01:30.962 CC lib/notify/notify.o 00:01:30.962 CC lib/notify/notify_rpc.o 00:01:30.962 CC lib/keyring/keyring.o 00:01:30.962 CC lib/keyring/keyring_rpc.o 00:01:31.220 LIB libspdk_notify.a 00:01:31.220 SO libspdk_notify.so.6.0 00:01:31.220 LIB libspdk_keyring.a 00:01:31.220 LIB 
libspdk_trace.a 00:01:31.220 SO libspdk_keyring.so.1.0 00:01:31.220 SO libspdk_trace.so.10.0 00:01:31.220 SYMLINK libspdk_notify.so 00:01:31.220 SYMLINK libspdk_keyring.so 00:01:31.220 SYMLINK libspdk_trace.so 00:01:31.788 CC lib/sock/sock.o 00:01:31.788 CC lib/sock/sock_rpc.o 00:01:31.788 CC lib/thread/thread.o 00:01:31.788 CC lib/thread/iobuf.o 00:01:32.047 LIB libspdk_sock.a 00:01:32.047 SO libspdk_sock.so.9.0 00:01:32.047 SYMLINK libspdk_sock.so 00:01:32.306 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:32.306 CC lib/nvme/nvme_ctrlr.o 00:01:32.306 CC lib/nvme/nvme_fabric.o 00:01:32.306 CC lib/nvme/nvme_ns_cmd.o 00:01:32.306 CC lib/nvme/nvme_ns.o 00:01:32.306 CC lib/nvme/nvme_pcie_common.o 00:01:32.306 CC lib/nvme/nvme_pcie.o 00:01:32.306 CC lib/nvme/nvme_qpair.o 00:01:32.306 CC lib/nvme/nvme.o 00:01:32.306 CC lib/nvme/nvme_quirks.o 00:01:32.566 CC lib/nvme/nvme_transport.o 00:01:32.566 CC lib/nvme/nvme_discovery.o 00:01:32.566 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:32.566 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:32.566 CC lib/nvme/nvme_tcp.o 00:01:32.566 CC lib/nvme/nvme_opal.o 00:01:32.566 CC lib/nvme/nvme_io_msg.o 00:01:32.566 CC lib/nvme/nvme_poll_group.o 00:01:32.566 CC lib/nvme/nvme_zns.o 00:01:32.566 CC lib/nvme/nvme_stubs.o 00:01:32.566 CC lib/nvme/nvme_auth.o 00:01:32.566 CC lib/nvme/nvme_cuse.o 00:01:32.566 CC lib/nvme/nvme_vfio_user.o 00:01:32.566 CC lib/nvme/nvme_rdma.o 00:01:32.825 LIB libspdk_thread.a 00:01:32.825 SO libspdk_thread.so.10.0 00:01:32.825 SYMLINK libspdk_thread.so 00:01:33.084 CC lib/accel/accel.o 00:01:33.084 CC lib/accel/accel_rpc.o 00:01:33.084 CC lib/accel/accel_sw.o 00:01:33.084 CC lib/virtio/virtio.o 00:01:33.084 CC lib/virtio/virtio_vhost_user.o 00:01:33.084 CC lib/virtio/virtio_vfio_user.o 00:01:33.084 CC lib/virtio/virtio_pci.o 00:01:33.084 CC lib/blob/blobstore.o 00:01:33.084 CC lib/blob/request.o 00:01:33.084 CC lib/blob/zeroes.o 00:01:33.084 CC lib/vfu_tgt/tgt_endpoint.o 00:01:33.084 CC lib/blob/blob_bs_dev.o 00:01:33.084 CC lib/vfu_tgt/tgt_rpc.o 00:01:33.084 CC lib/init/json_config.o 00:01:33.084 CC lib/init/rpc.o 00:01:33.084 CC lib/init/subsystem.o 00:01:33.084 CC lib/init/subsystem_rpc.o 00:01:33.343 LIB libspdk_init.a 00:01:33.343 SO libspdk_init.so.5.0 00:01:33.343 LIB libspdk_virtio.a 00:01:33.343 LIB libspdk_vfu_tgt.a 00:01:33.602 SO libspdk_virtio.so.7.0 00:01:33.602 SYMLINK libspdk_init.so 00:01:33.602 SO libspdk_vfu_tgt.so.3.0 00:01:33.602 SYMLINK libspdk_virtio.so 00:01:33.602 SYMLINK libspdk_vfu_tgt.so 00:01:33.861 CC lib/event/app.o 00:01:33.861 CC lib/event/reactor.o 00:01:33.861 CC lib/event/log_rpc.o 00:01:33.861 CC lib/event/app_rpc.o 00:01:33.861 CC lib/event/scheduler_static.o 00:01:33.861 LIB libspdk_accel.a 00:01:33.861 SO libspdk_accel.so.15.0 00:01:33.861 LIB libspdk_nvme.a 00:01:34.120 SYMLINK libspdk_accel.so 00:01:34.120 SO libspdk_nvme.so.13.0 00:01:34.120 LIB libspdk_event.a 00:01:34.120 SO libspdk_event.so.13.0 00:01:34.120 SYMLINK libspdk_event.so 00:01:34.379 CC lib/bdev/bdev.o 00:01:34.379 CC lib/bdev/bdev_rpc.o 00:01:34.379 CC lib/bdev/bdev_zone.o 00:01:34.379 CC lib/bdev/part.o 00:01:34.379 CC lib/bdev/scsi_nvme.o 00:01:34.379 SYMLINK libspdk_nvme.so 00:01:34.986 LIB libspdk_blob.a 00:01:35.245 SO libspdk_blob.so.11.0 00:01:35.245 SYMLINK libspdk_blob.so 00:01:35.503 CC lib/lvol/lvol.o 00:01:35.503 CC lib/blobfs/blobfs.o 00:01:35.503 CC lib/blobfs/tree.o 00:01:36.070 LIB libspdk_bdev.a 00:01:36.070 SO libspdk_bdev.so.15.0 00:01:36.070 LIB libspdk_blobfs.a 00:01:36.070 LIB libspdk_lvol.a 00:01:36.070 SO libspdk_blobfs.so.10.0 
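
The LIB / SO / SYMLINK triplets in this part of the log are SPDK's per-library build steps: archive the static .a, link the versioned shared object, then create the unversioned .so symlink. A rough hand-rolled equivalent of the pattern, using libspdk_log (whose objects appear above) as the example; this is a sketch, not SPDK's actual make rules:

    ar crs libspdk_log.a log.o log_flags.o log_deprecated.o   # LIB: static archive
    cc -shared -Wl,-soname,libspdk_log.so.7.0 \
       -Wl,--whole-archive libspdk_log.a -Wl,--no-whole-archive \
       -o libspdk_log.so.7.0                                  # SO: versioned shared object
    ln -sf libspdk_log.so.7.0 libspdk_log.so                  # SYMLINK: unversioned name
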
00:01:36.328 SO libspdk_lvol.so.10.0 00:01:36.328 SYMLINK libspdk_bdev.so 00:01:36.328 SYMLINK libspdk_blobfs.so 00:01:36.328 SYMLINK libspdk_lvol.so 00:01:36.590 CC lib/scsi/dev.o 00:01:36.590 CC lib/scsi/lun.o 00:01:36.590 CC lib/scsi/port.o 00:01:36.590 CC lib/scsi/scsi.o 00:01:36.590 CC lib/scsi/scsi_rpc.o 00:01:36.590 CC lib/scsi/scsi_bdev.o 00:01:36.590 CC lib/scsi/scsi_pr.o 00:01:36.590 CC lib/scsi/task.o 00:01:36.590 CC lib/nvmf/ctrlr.o 00:01:36.590 CC lib/nvmf/ctrlr_bdev.o 00:01:36.590 CC lib/nvmf/ctrlr_discovery.o 00:01:36.590 CC lib/nvmf/subsystem.o 00:01:36.590 CC lib/nvmf/nvmf.o 00:01:36.590 CC lib/nvmf/transport.o 00:01:36.590 CC lib/nvmf/nvmf_rpc.o 00:01:36.590 CC lib/nvmf/tcp.o 00:01:36.590 CC lib/nvmf/vfio_user.o 00:01:36.590 CC lib/nvmf/rdma.o 00:01:36.590 CC lib/nbd/nbd.o 00:01:36.590 CC lib/nbd/nbd_rpc.o 00:01:36.590 CC lib/ftl/ftl_core.o 00:01:36.590 CC lib/ftl/ftl_io.o 00:01:36.590 CC lib/ftl/ftl_init.o 00:01:36.590 CC lib/ftl/ftl_layout.o 00:01:36.590 CC lib/ftl/ftl_debug.o 00:01:36.590 CC lib/ftl/ftl_l2p_flat.o 00:01:36.590 CC lib/ftl/ftl_sb.o 00:01:36.590 CC lib/ftl/ftl_l2p.o 00:01:36.590 CC lib/ftl/ftl_writer.o 00:01:36.590 CC lib/ftl/ftl_band.o 00:01:36.590 CC lib/ftl/ftl_nv_cache.o 00:01:36.590 CC lib/ftl/ftl_rq.o 00:01:36.590 CC lib/ftl/ftl_band_ops.o 00:01:36.590 CC lib/ftl/ftl_reloc.o 00:01:36.590 CC lib/ftl/ftl_l2p_cache.o 00:01:36.590 CC lib/ftl/ftl_p2l.o 00:01:36.590 CC lib/ftl/mngt/ftl_mngt.o 00:01:36.590 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:36.590 CC lib/ublk/ublk_rpc.o 00:01:36.590 CC lib/ublk/ublk.o 00:01:36.590 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:36.590 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:36.590 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:36.590 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:36.590 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:36.590 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:36.590 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:36.590 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:36.590 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:36.590 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:36.590 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:36.590 CC lib/ftl/utils/ftl_conf.o 00:01:36.590 CC lib/ftl/utils/ftl_md.o 00:01:36.590 CC lib/ftl/utils/ftl_mempool.o 00:01:36.590 CC lib/ftl/utils/ftl_property.o 00:01:36.590 CC lib/ftl/utils/ftl_bitmap.o 00:01:36.590 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:36.590 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:36.590 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:36.590 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:36.590 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:36.590 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:36.590 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:36.590 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:36.590 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:36.590 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:36.590 CC lib/ftl/base/ftl_base_bdev.o 00:01:36.590 CC lib/ftl/base/ftl_base_dev.o 00:01:36.590 CC lib/ftl/ftl_trace.o 00:01:37.157 LIB libspdk_nbd.a 00:01:37.157 LIB libspdk_scsi.a 00:01:37.157 SO libspdk_nbd.so.7.0 00:01:37.157 LIB libspdk_ublk.a 00:01:37.157 SO libspdk_ublk.so.3.0 00:01:37.157 SO libspdk_scsi.so.9.0 00:01:37.157 SYMLINK libspdk_nbd.so 00:01:37.415 SYMLINK libspdk_ublk.so 00:01:37.415 SYMLINK libspdk_scsi.so 00:01:37.415 LIB libspdk_ftl.a 00:01:37.674 SO libspdk_ftl.so.9.0 00:01:37.674 CC lib/vhost/vhost_scsi.o 00:01:37.674 CC lib/iscsi/conn.o 00:01:37.674 CC lib/iscsi/iscsi.o 00:01:37.674 CC lib/vhost/vhost.o 00:01:37.674 CC lib/iscsi/init_grp.o 00:01:37.674 CC lib/vhost/rte_vhost_user.o 00:01:37.674 CC lib/vhost/vhost_rpc.o 00:01:37.674 CC 
lib/vhost/vhost_blk.o 00:01:37.674 CC lib/iscsi/md5.o 00:01:37.674 CC lib/iscsi/param.o 00:01:37.674 CC lib/iscsi/portal_grp.o 00:01:37.674 CC lib/iscsi/tgt_node.o 00:01:37.674 CC lib/iscsi/iscsi_subsystem.o 00:01:37.674 CC lib/iscsi/iscsi_rpc.o 00:01:37.674 CC lib/iscsi/task.o 00:01:37.932 SYMLINK libspdk_ftl.so 00:01:38.190 LIB libspdk_nvmf.a 00:01:38.190 SO libspdk_nvmf.so.18.0 00:01:38.449 SYMLINK libspdk_nvmf.so 00:01:38.449 LIB libspdk_vhost.a 00:01:38.449 SO libspdk_vhost.so.8.0 00:01:38.708 SYMLINK libspdk_vhost.so 00:01:38.708 LIB libspdk_iscsi.a 00:01:38.708 SO libspdk_iscsi.so.8.0 00:01:38.968 SYMLINK libspdk_iscsi.so 00:01:39.536 CC module/env_dpdk/env_dpdk_rpc.o 00:01:39.536 CC module/vfu_device/vfu_virtio.o 00:01:39.536 CC module/vfu_device/vfu_virtio_blk.o 00:01:39.536 CC module/vfu_device/vfu_virtio_scsi.o 00:01:39.536 CC module/vfu_device/vfu_virtio_rpc.o 00:01:39.536 LIB libspdk_env_dpdk_rpc.a 00:01:39.536 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:39.537 CC module/scheduler/gscheduler/gscheduler.o 00:01:39.537 CC module/sock/posix/posix.o 00:01:39.537 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:39.537 CC module/blob/bdev/blob_bdev.o 00:01:39.537 CC module/accel/dsa/accel_dsa.o 00:01:39.537 CC module/accel/dsa/accel_dsa_rpc.o 00:01:39.537 CC module/accel/error/accel_error_rpc.o 00:01:39.537 CC module/accel/error/accel_error.o 00:01:39.537 CC module/keyring/file/keyring_rpc.o 00:01:39.537 CC module/keyring/file/keyring.o 00:01:39.537 CC module/accel/ioat/accel_ioat.o 00:01:39.537 CC module/accel/ioat/accel_ioat_rpc.o 00:01:39.537 CC module/accel/iaa/accel_iaa.o 00:01:39.537 SO libspdk_env_dpdk_rpc.so.6.0 00:01:39.537 CC module/accel/iaa/accel_iaa_rpc.o 00:01:39.795 SYMLINK libspdk_env_dpdk_rpc.so 00:01:39.795 LIB libspdk_scheduler_gscheduler.a 00:01:39.795 LIB libspdk_scheduler_dpdk_governor.a 00:01:39.795 SO libspdk_scheduler_gscheduler.so.4.0 00:01:39.795 LIB libspdk_keyring_file.a 00:01:39.795 LIB libspdk_scheduler_dynamic.a 00:01:39.795 LIB libspdk_accel_ioat.a 00:01:39.795 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:39.795 LIB libspdk_accel_error.a 00:01:39.795 SYMLINK libspdk_scheduler_gscheduler.so 00:01:39.795 SO libspdk_keyring_file.so.1.0 00:01:39.795 LIB libspdk_accel_dsa.a 00:01:39.795 SO libspdk_scheduler_dynamic.so.4.0 00:01:39.796 LIB libspdk_accel_iaa.a 00:01:39.796 SO libspdk_accel_ioat.so.6.0 00:01:39.796 LIB libspdk_blob_bdev.a 00:01:39.796 SO libspdk_accel_error.so.2.0 00:01:39.796 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:39.796 SO libspdk_accel_dsa.so.5.0 00:01:39.796 SO libspdk_accel_iaa.so.3.0 00:01:39.796 SYMLINK libspdk_keyring_file.so 00:01:39.796 SYMLINK libspdk_scheduler_dynamic.so 00:01:39.796 SYMLINK libspdk_accel_ioat.so 00:01:39.796 SO libspdk_blob_bdev.so.11.0 00:01:39.796 LIB libspdk_vfu_device.a 00:01:39.796 SYMLINK libspdk_accel_error.so 00:01:40.054 SYMLINK libspdk_accel_dsa.so 00:01:40.054 SYMLINK libspdk_accel_iaa.so 00:01:40.054 SYMLINK libspdk_blob_bdev.so 00:01:40.054 SO libspdk_vfu_device.so.3.0 00:01:40.055 SYMLINK libspdk_vfu_device.so 00:01:40.055 LIB libspdk_sock_posix.a 00:01:40.314 SO libspdk_sock_posix.so.6.0 00:01:40.314 SYMLINK libspdk_sock_posix.so 00:01:40.577 CC module/bdev/delay/vbdev_delay.o 00:01:40.577 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:40.577 CC module/blobfs/bdev/blobfs_bdev.o 00:01:40.577 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:40.577 CC module/bdev/ftl/bdev_ftl.o 00:01:40.577 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:40.577 CC module/bdev/iscsi/bdev_iscsi.o 
00:01:40.577 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:40.577 CC module/bdev/nvme/bdev_nvme.o 00:01:40.577 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:40.577 CC module/bdev/nvme/bdev_mdns_client.o 00:01:40.577 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:40.577 CC module/bdev/malloc/bdev_malloc.o 00:01:40.577 CC module/bdev/nvme/nvme_rpc.o 00:01:40.577 CC module/bdev/nvme/vbdev_opal.o 00:01:40.577 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:40.577 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:40.577 CC module/bdev/error/vbdev_error_rpc.o 00:01:40.577 CC module/bdev/error/vbdev_error.o 00:01:40.577 CC module/bdev/split/vbdev_split.o 00:01:40.577 CC module/bdev/split/vbdev_split_rpc.o 00:01:40.577 CC module/bdev/null/bdev_null.o 00:01:40.577 CC module/bdev/null/bdev_null_rpc.o 00:01:40.577 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:40.577 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:40.577 CC module/bdev/gpt/gpt.o 00:01:40.577 CC module/bdev/gpt/vbdev_gpt.o 00:01:40.577 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:40.577 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:40.577 CC module/bdev/passthru/vbdev_passthru.o 00:01:40.577 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:40.577 CC module/bdev/lvol/vbdev_lvol.o 00:01:40.577 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:40.577 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:40.577 CC module/bdev/raid/bdev_raid.o 00:01:40.577 CC module/bdev/raid/bdev_raid_rpc.o 00:01:40.577 CC module/bdev/aio/bdev_aio.o 00:01:40.577 CC module/bdev/raid/bdev_raid_sb.o 00:01:40.577 CC module/bdev/aio/bdev_aio_rpc.o 00:01:40.577 CC module/bdev/raid/raid0.o 00:01:40.577 CC module/bdev/raid/concat.o 00:01:40.577 CC module/bdev/raid/raid1.o 00:01:40.577 LIB libspdk_blobfs_bdev.a 00:01:40.836 SO libspdk_blobfs_bdev.so.6.0 00:01:40.836 LIB libspdk_bdev_split.a 00:01:40.836 LIB libspdk_bdev_null.a 00:01:40.836 LIB libspdk_bdev_ftl.a 00:01:40.836 LIB libspdk_bdev_error.a 00:01:40.836 SYMLINK libspdk_blobfs_bdev.so 00:01:40.836 LIB libspdk_bdev_gpt.a 00:01:40.836 SO libspdk_bdev_ftl.so.6.0 00:01:40.836 SO libspdk_bdev_error.so.6.0 00:01:40.836 SO libspdk_bdev_split.so.6.0 00:01:40.836 SO libspdk_bdev_null.so.6.0 00:01:40.836 LIB libspdk_bdev_passthru.a 00:01:40.836 LIB libspdk_bdev_delay.a 00:01:40.836 SO libspdk_bdev_gpt.so.6.0 00:01:40.836 LIB libspdk_bdev_zone_block.a 00:01:40.836 LIB libspdk_bdev_malloc.a 00:01:40.836 LIB libspdk_bdev_iscsi.a 00:01:40.836 LIB libspdk_bdev_aio.a 00:01:40.836 SO libspdk_bdev_delay.so.6.0 00:01:40.836 SO libspdk_bdev_passthru.so.6.0 00:01:40.836 SYMLINK libspdk_bdev_ftl.so 00:01:40.836 SYMLINK libspdk_bdev_error.so 00:01:40.836 SYMLINK libspdk_bdev_split.so 00:01:40.836 SYMLINK libspdk_bdev_null.so 00:01:40.836 SO libspdk_bdev_aio.so.6.0 00:01:40.836 SO libspdk_bdev_iscsi.so.6.0 00:01:40.836 SYMLINK libspdk_bdev_gpt.so 00:01:40.836 SO libspdk_bdev_zone_block.so.6.0 00:01:40.836 SO libspdk_bdev_malloc.so.6.0 00:01:40.836 SYMLINK libspdk_bdev_delay.so 00:01:40.836 SYMLINK libspdk_bdev_passthru.so 00:01:40.836 SYMLINK libspdk_bdev_iscsi.so 00:01:40.836 SYMLINK libspdk_bdev_aio.so 00:01:40.836 LIB libspdk_bdev_lvol.a 00:01:40.836 SYMLINK libspdk_bdev_zone_block.so 00:01:41.096 SYMLINK libspdk_bdev_malloc.so 00:01:41.096 LIB libspdk_bdev_virtio.a 00:01:41.096 SO libspdk_bdev_lvol.so.6.0 00:01:41.096 SO libspdk_bdev_virtio.so.6.0 00:01:41.096 SYMLINK libspdk_bdev_lvol.so 00:01:41.096 SYMLINK libspdk_bdev_virtio.so 00:01:41.354 LIB libspdk_bdev_raid.a 00:01:41.354 SO libspdk_bdev_raid.so.6.0 00:01:41.354 SYMLINK 
libspdk_bdev_raid.so 00:01:41.922 LIB libspdk_bdev_nvme.a 00:01:42.181 SO libspdk_bdev_nvme.so.7.0 00:01:42.181 SYMLINK libspdk_bdev_nvme.so 00:01:43.118 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:43.118 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:43.118 CC module/event/subsystems/sock/sock.o 00:01:43.118 CC module/event/subsystems/iobuf/iobuf.o 00:01:43.118 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:43.118 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:43.118 CC module/event/subsystems/keyring/keyring.o 00:01:43.118 CC module/event/subsystems/vmd/vmd.o 00:01:43.118 CC module/event/subsystems/scheduler/scheduler.o 00:01:43.118 LIB libspdk_event_vfu_tgt.a 00:01:43.118 LIB libspdk_event_scheduler.a 00:01:43.118 LIB libspdk_event_vhost_blk.a 00:01:43.118 LIB libspdk_event_sock.a 00:01:43.118 LIB libspdk_event_keyring.a 00:01:43.118 LIB libspdk_event_vmd.a 00:01:43.118 LIB libspdk_event_iobuf.a 00:01:43.118 SO libspdk_event_vfu_tgt.so.3.0 00:01:43.118 SO libspdk_event_scheduler.so.4.0 00:01:43.118 SO libspdk_event_vhost_blk.so.3.0 00:01:43.118 SO libspdk_event_sock.so.5.0 00:01:43.118 SO libspdk_event_keyring.so.1.0 00:01:43.118 SO libspdk_event_vmd.so.6.0 00:01:43.118 SO libspdk_event_iobuf.so.3.0 00:01:43.118 SYMLINK libspdk_event_scheduler.so 00:01:43.118 SYMLINK libspdk_event_vfu_tgt.so 00:01:43.118 SYMLINK libspdk_event_sock.so 00:01:43.118 SYMLINK libspdk_event_vhost_blk.so 00:01:43.118 SYMLINK libspdk_event_keyring.so 00:01:43.118 SYMLINK libspdk_event_vmd.so 00:01:43.118 SYMLINK libspdk_event_iobuf.so 00:01:43.693 CC module/event/subsystems/accel/accel.o 00:01:43.693 LIB libspdk_event_accel.a 00:01:43.693 SO libspdk_event_accel.so.6.0 00:01:43.693 SYMLINK libspdk_event_accel.so 00:01:44.269 CC module/event/subsystems/bdev/bdev.o 00:01:44.269 LIB libspdk_event_bdev.a 00:01:44.269 SO libspdk_event_bdev.so.6.0 00:01:44.526 SYMLINK libspdk_event_bdev.so 00:01:44.785 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:44.785 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:44.785 CC module/event/subsystems/nbd/nbd.o 00:01:44.785 CC module/event/subsystems/scsi/scsi.o 00:01:44.785 CC module/event/subsystems/ublk/ublk.o 00:01:44.785 LIB libspdk_event_nbd.a 00:01:44.785 LIB libspdk_event_ublk.a 00:01:44.785 LIB libspdk_event_scsi.a 00:01:45.044 SO libspdk_event_nbd.so.6.0 00:01:45.044 SO libspdk_event_ublk.so.3.0 00:01:45.044 LIB libspdk_event_nvmf.a 00:01:45.044 SO libspdk_event_scsi.so.6.0 00:01:45.044 SO libspdk_event_nvmf.so.6.0 00:01:45.044 SYMLINK libspdk_event_nbd.so 00:01:45.044 SYMLINK libspdk_event_ublk.so 00:01:45.044 SYMLINK libspdk_event_scsi.so 00:01:45.044 SYMLINK libspdk_event_nvmf.so 00:01:45.302 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:45.302 CC module/event/subsystems/iscsi/iscsi.o 00:01:45.560 LIB libspdk_event_iscsi.a 00:01:45.560 LIB libspdk_event_vhost_scsi.a 00:01:45.560 SO libspdk_event_vhost_scsi.so.3.0 00:01:45.560 SO libspdk_event_iscsi.so.6.0 00:01:45.560 SYMLINK libspdk_event_vhost_scsi.so 00:01:45.560 SYMLINK libspdk_event_iscsi.so 00:01:45.819 SO libspdk.so.6.0 00:01:45.819 SYMLINK libspdk.so 00:01:46.079 CC app/trace_record/trace_record.o 00:01:46.355 CC app/spdk_nvme_perf/perf.o 00:01:46.355 CXX app/trace/trace.o 00:01:46.355 CC app/spdk_nvme_identify/identify.o 00:01:46.355 CC app/spdk_lspci/spdk_lspci.o 00:01:46.355 CC app/spdk_top/spdk_top.o 00:01:46.355 CC app/spdk_nvme_discover/discovery_aer.o 00:01:46.355 TEST_HEADER include/spdk/accel_module.h 00:01:46.355 CC test/rpc_client/rpc_client_test.o 00:01:46.355 
TEST_HEADER include/spdk/barrier.h 00:01:46.355 TEST_HEADER include/spdk/accel.h 00:01:46.355 TEST_HEADER include/spdk/assert.h 00:01:46.355 TEST_HEADER include/spdk/base64.h 00:01:46.355 TEST_HEADER include/spdk/bdev.h 00:01:46.355 TEST_HEADER include/spdk/bdev_module.h 00:01:46.355 TEST_HEADER include/spdk/bdev_zone.h 00:01:46.355 TEST_HEADER include/spdk/bit_array.h 00:01:46.355 TEST_HEADER include/spdk/bit_pool.h 00:01:46.355 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:46.355 TEST_HEADER include/spdk/blob_bdev.h 00:01:46.355 TEST_HEADER include/spdk/blobfs.h 00:01:46.355 TEST_HEADER include/spdk/blob.h 00:01:46.355 TEST_HEADER include/spdk/conf.h 00:01:46.355 TEST_HEADER include/spdk/config.h 00:01:46.355 TEST_HEADER include/spdk/crc16.h 00:01:46.355 TEST_HEADER include/spdk/cpuset.h 00:01:46.355 TEST_HEADER include/spdk/crc32.h 00:01:46.355 TEST_HEADER include/spdk/crc64.h 00:01:46.355 TEST_HEADER include/spdk/dif.h 00:01:46.355 TEST_HEADER include/spdk/dma.h 00:01:46.355 TEST_HEADER include/spdk/endian.h 00:01:46.355 TEST_HEADER include/spdk/env_dpdk.h 00:01:46.355 TEST_HEADER include/spdk/env.h 00:01:46.355 TEST_HEADER include/spdk/fd_group.h 00:01:46.355 TEST_HEADER include/spdk/event.h 00:01:46.355 TEST_HEADER include/spdk/fd.h 00:01:46.355 TEST_HEADER include/spdk/file.h 00:01:46.355 TEST_HEADER include/spdk/ftl.h 00:01:46.355 TEST_HEADER include/spdk/gpt_spec.h 00:01:46.355 TEST_HEADER include/spdk/histogram_data.h 00:01:46.355 TEST_HEADER include/spdk/hexlify.h 00:01:46.355 TEST_HEADER include/spdk/idxd.h 00:01:46.355 CC app/spdk_dd/spdk_dd.o 00:01:46.355 TEST_HEADER include/spdk/idxd_spec.h 00:01:46.355 TEST_HEADER include/spdk/init.h 00:01:46.355 TEST_HEADER include/spdk/ioat.h 00:01:46.355 TEST_HEADER include/spdk/ioat_spec.h 00:01:46.355 TEST_HEADER include/spdk/iscsi_spec.h 00:01:46.355 TEST_HEADER include/spdk/json.h 00:01:46.355 TEST_HEADER include/spdk/jsonrpc.h 00:01:46.355 TEST_HEADER include/spdk/keyring.h 00:01:46.355 TEST_HEADER include/spdk/keyring_module.h 00:01:46.355 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:46.355 TEST_HEADER include/spdk/likely.h 00:01:46.355 TEST_HEADER include/spdk/log.h 00:01:46.355 TEST_HEADER include/spdk/lvol.h 00:01:46.355 CC app/nvmf_tgt/nvmf_main.o 00:01:46.355 TEST_HEADER include/spdk/memory.h 00:01:46.355 TEST_HEADER include/spdk/mmio.h 00:01:46.355 TEST_HEADER include/spdk/nbd.h 00:01:46.355 TEST_HEADER include/spdk/notify.h 00:01:46.355 TEST_HEADER include/spdk/nvme.h 00:01:46.355 TEST_HEADER include/spdk/nvme_intel.h 00:01:46.355 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:46.355 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:46.355 TEST_HEADER include/spdk/nvme_spec.h 00:01:46.355 TEST_HEADER include/spdk/nvme_zns.h 00:01:46.355 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:46.355 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:46.355 CC app/iscsi_tgt/iscsi_tgt.o 00:01:46.355 TEST_HEADER include/spdk/nvmf.h 00:01:46.355 TEST_HEADER include/spdk/nvmf_spec.h 00:01:46.355 TEST_HEADER include/spdk/nvmf_transport.h 00:01:46.355 TEST_HEADER include/spdk/opal.h 00:01:46.355 CC app/vhost/vhost.o 00:01:46.355 TEST_HEADER include/spdk/opal_spec.h 00:01:46.355 TEST_HEADER include/spdk/pci_ids.h 00:01:46.355 TEST_HEADER include/spdk/pipe.h 00:01:46.355 TEST_HEADER include/spdk/queue.h 00:01:46.355 TEST_HEADER include/spdk/reduce.h 00:01:46.355 TEST_HEADER include/spdk/rpc.h 00:01:46.355 TEST_HEADER include/spdk/scheduler.h 00:01:46.355 TEST_HEADER include/spdk/scsi.h 00:01:46.355 TEST_HEADER include/spdk/scsi_spec.h 00:01:46.355 
TEST_HEADER include/spdk/sock.h 00:01:46.355 TEST_HEADER include/spdk/stdinc.h 00:01:46.355 TEST_HEADER include/spdk/string.h 00:01:46.355 CC app/spdk_tgt/spdk_tgt.o 00:01:46.355 TEST_HEADER include/spdk/thread.h 00:01:46.355 TEST_HEADER include/spdk/trace.h 00:01:46.355 TEST_HEADER include/spdk/trace_parser.h 00:01:46.355 TEST_HEADER include/spdk/tree.h 00:01:46.355 TEST_HEADER include/spdk/ublk.h 00:01:46.355 TEST_HEADER include/spdk/util.h 00:01:46.355 TEST_HEADER include/spdk/uuid.h 00:01:46.355 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:46.355 TEST_HEADER include/spdk/version.h 00:01:46.355 TEST_HEADER include/spdk/vhost.h 00:01:46.355 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:46.355 TEST_HEADER include/spdk/vmd.h 00:01:46.355 TEST_HEADER include/spdk/xor.h 00:01:46.355 TEST_HEADER include/spdk/zipf.h 00:01:46.355 CXX test/cpp_headers/accel.o 00:01:46.355 CXX test/cpp_headers/accel_module.o 00:01:46.355 CXX test/cpp_headers/assert.o 00:01:46.355 CXX test/cpp_headers/barrier.o 00:01:46.355 CXX test/cpp_headers/base64.o 00:01:46.355 CXX test/cpp_headers/bdev.o 00:01:46.355 CXX test/cpp_headers/bdev_zone.o 00:01:46.355 CXX test/cpp_headers/bdev_module.o 00:01:46.355 CXX test/cpp_headers/bit_array.o 00:01:46.355 CXX test/cpp_headers/bit_pool.o 00:01:46.355 CXX test/cpp_headers/blob_bdev.o 00:01:46.355 CXX test/cpp_headers/blobfs_bdev.o 00:01:46.355 CXX test/cpp_headers/blobfs.o 00:01:46.355 CXX test/cpp_headers/blob.o 00:01:46.355 CXX test/cpp_headers/cpuset.o 00:01:46.355 CXX test/cpp_headers/conf.o 00:01:46.355 CXX test/cpp_headers/config.o 00:01:46.355 CXX test/cpp_headers/crc16.o 00:01:46.355 CXX test/cpp_headers/crc32.o 00:01:46.355 CXX test/cpp_headers/crc64.o 00:01:46.355 CXX test/cpp_headers/dif.o 00:01:46.355 CXX test/cpp_headers/endian.o 00:01:46.355 CXX test/cpp_headers/dma.o 00:01:46.355 CXX test/cpp_headers/env.o 00:01:46.355 CXX test/cpp_headers/env_dpdk.o 00:01:46.355 CXX test/cpp_headers/event.o 00:01:46.355 CXX test/cpp_headers/fd_group.o 00:01:46.355 CXX test/cpp_headers/fd.o 00:01:46.355 CXX test/cpp_headers/ftl.o 00:01:46.355 CXX test/cpp_headers/file.o 00:01:46.355 CXX test/cpp_headers/gpt_spec.o 00:01:46.355 CXX test/cpp_headers/hexlify.o 00:01:46.355 CXX test/cpp_headers/histogram_data.o 00:01:46.355 CXX test/cpp_headers/idxd.o 00:01:46.355 CXX test/cpp_headers/idxd_spec.o 00:01:46.355 CXX test/cpp_headers/init.o 00:01:46.355 CXX test/cpp_headers/ioat.o 00:01:46.355 CC examples/ioat/perf/perf.o 00:01:46.355 CC examples/ioat/verify/verify.o 00:01:46.355 CXX test/cpp_headers/ioat_spec.o 00:01:46.355 CC test/env/pci/pci_ut.o 00:01:46.631 CC examples/idxd/perf/perf.o 00:01:46.631 CC examples/nvme/hello_world/hello_world.o 00:01:46.631 CC test/nvme/reset/reset.o 00:01:46.631 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:46.631 CC test/nvme/aer/aer.o 00:01:46.631 CC examples/accel/perf/accel_perf.o 00:01:46.631 CC examples/vmd/lsvmd/lsvmd.o 00:01:46.631 CC test/nvme/sgl/sgl.o 00:01:46.631 CC test/env/vtophys/vtophys.o 00:01:46.631 CC test/nvme/reserve/reserve.o 00:01:46.631 CC test/nvme/err_injection/err_injection.o 00:01:46.631 CC test/nvme/connect_stress/connect_stress.o 00:01:46.631 CC test/nvme/e2edp/nvme_dp.o 00:01:46.631 CC test/nvme/startup/startup.o 00:01:46.631 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:46.631 CC test/nvme/overhead/overhead.o 00:01:46.631 CC examples/nvme/reconnect/reconnect.o 00:01:46.631 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:46.631 CC test/event/reactor/reactor.o 00:01:46.631 CC 
examples/nvme/hotplug/hotplug.o 00:01:46.631 CC test/env/memory/memory_ut.o 00:01:46.631 CC app/fio/nvme/fio_plugin.o 00:01:46.631 CC test/event/event_perf/event_perf.o 00:01:46.631 CC test/nvme/fused_ordering/fused_ordering.o 00:01:46.631 CC examples/vmd/led/led.o 00:01:46.631 CC test/app/histogram_perf/histogram_perf.o 00:01:46.631 CC examples/util/zipf/zipf.o 00:01:46.631 CC examples/sock/hello_world/hello_sock.o 00:01:46.631 CC examples/nvme/arbitration/arbitration.o 00:01:46.631 CC examples/blob/cli/blobcli.o 00:01:46.631 CC test/event/reactor_perf/reactor_perf.o 00:01:46.631 CC examples/nvme/abort/abort.o 00:01:46.631 CC test/nvme/cuse/cuse.o 00:01:46.631 CC test/accel/dif/dif.o 00:01:46.631 CC test/nvme/fdp/fdp.o 00:01:46.631 CC test/thread/poller_perf/poller_perf.o 00:01:46.631 CC test/app/jsoncat/jsoncat.o 00:01:46.631 CC test/nvme/compliance/nvme_compliance.o 00:01:46.631 CC test/dma/test_dma/test_dma.o 00:01:46.631 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:46.632 CC test/app/stub/stub.o 00:01:46.632 CC test/nvme/boot_partition/boot_partition.o 00:01:46.632 CC test/nvme/simple_copy/simple_copy.o 00:01:46.632 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:46.632 CC examples/bdev/hello_world/hello_bdev.o 00:01:46.632 CC test/bdev/bdevio/bdevio.o 00:01:46.632 CC examples/blob/hello_world/hello_blob.o 00:01:46.632 CC examples/bdev/bdevperf/bdevperf.o 00:01:46.632 CC examples/thread/thread/thread_ex.o 00:01:46.632 CC test/app/bdev_svc/bdev_svc.o 00:01:46.632 CC test/event/app_repeat/app_repeat.o 00:01:46.632 CC test/blobfs/mkfs/mkfs.o 00:01:46.632 CC examples/nvmf/nvmf/nvmf.o 00:01:46.632 CC app/fio/bdev/fio_plugin.o 00:01:46.632 CC test/event/scheduler/scheduler.o 00:01:46.632 LINK spdk_lspci 00:01:46.908 LINK rpc_client_test 00:01:46.908 CC test/lvol/esnap/esnap.o 00:01:46.908 CC test/env/mem_callbacks/mem_callbacks.o 00:01:46.908 LINK nvmf_tgt 00:01:46.908 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:46.908 LINK interrupt_tgt 00:01:46.908 LINK spdk_nvme_discover 00:01:46.908 LINK vhost 00:01:47.171 LINK spdk_trace_record 00:01:47.171 LINK vtophys 00:01:47.171 LINK reactor 00:01:47.171 LINK histogram_perf 00:01:47.171 LINK jsoncat 00:01:47.171 LINK lsvmd 00:01:47.171 LINK led 00:01:47.171 LINK event_perf 00:01:47.171 LINK poller_perf 00:01:47.171 LINK iscsi_tgt 00:01:47.171 LINK reactor_perf 00:01:47.171 LINK zipf 00:01:47.171 CXX test/cpp_headers/iscsi_spec.o 00:01:47.171 CXX test/cpp_headers/json.o 00:01:47.171 CXX test/cpp_headers/jsonrpc.o 00:01:47.171 LINK startup 00:01:47.171 CXX test/cpp_headers/keyring.o 00:01:47.171 LINK env_dpdk_post_init 00:01:47.171 LINK spdk_tgt 00:01:47.171 LINK connect_stress 00:01:47.171 CXX test/cpp_headers/keyring_module.o 00:01:47.171 CXX test/cpp_headers/likely.o 00:01:47.171 CXX test/cpp_headers/log.o 00:01:47.171 CXX test/cpp_headers/lvol.o 00:01:47.171 CXX test/cpp_headers/memory.o 00:01:47.171 CXX test/cpp_headers/mmio.o 00:01:47.171 CXX test/cpp_headers/nbd.o 00:01:47.171 LINK ioat_perf 00:01:47.171 CXX test/cpp_headers/notify.o 00:01:47.171 LINK app_repeat 00:01:47.171 CXX test/cpp_headers/nvme.o 00:01:47.171 LINK doorbell_aers 00:01:47.171 CXX test/cpp_headers/nvme_intel.o 00:01:47.171 CXX test/cpp_headers/nvme_ocssd.o 00:01:47.171 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:47.171 CXX test/cpp_headers/nvme_spec.o 00:01:47.171 CXX test/cpp_headers/nvme_zns.o 00:01:47.171 LINK stub 00:01:47.171 CXX test/cpp_headers/nvmf_cmd.o 00:01:47.171 LINK fused_ordering 00:01:47.171 LINK boot_partition 00:01:47.171 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:01:47.171 LINK err_injection 00:01:47.171 LINK cmb_copy 00:01:47.171 CXX test/cpp_headers/nvmf.o 00:01:47.171 LINK pmr_persistence 00:01:47.171 CXX test/cpp_headers/nvmf_spec.o 00:01:47.171 CXX test/cpp_headers/nvmf_transport.o 00:01:47.171 CXX test/cpp_headers/opal.o 00:01:47.171 CXX test/cpp_headers/opal_spec.o 00:01:47.171 CXX test/cpp_headers/pipe.o 00:01:47.171 CXX test/cpp_headers/queue.o 00:01:47.171 CXX test/cpp_headers/pci_ids.o 00:01:47.171 CXX test/cpp_headers/reduce.o 00:01:47.171 CXX test/cpp_headers/rpc.o 00:01:47.171 CXX test/cpp_headers/scheduler.o 00:01:47.171 LINK hello_world 00:01:47.171 LINK verify 00:01:47.171 CXX test/cpp_headers/scsi.o 00:01:47.171 LINK reserve 00:01:47.171 LINK mkfs 00:01:47.171 CXX test/cpp_headers/scsi_spec.o 00:01:47.171 CXX test/cpp_headers/sock.o 00:01:47.171 LINK bdev_svc 00:01:47.171 LINK hello_bdev 00:01:47.171 LINK spdk_dd 00:01:47.171 LINK simple_copy 00:01:47.171 CXX test/cpp_headers/stdinc.o 00:01:47.171 CXX test/cpp_headers/thread.o 00:01:47.171 LINK hotplug 00:01:47.171 CXX test/cpp_headers/string.o 00:01:47.171 CXX test/cpp_headers/trace.o 00:01:47.171 LINK hello_sock 00:01:47.171 LINK reset 00:01:47.171 LINK aer 00:01:47.171 LINK nvme_dp 00:01:47.445 LINK sgl 00:01:47.445 LINK hello_blob 00:01:47.445 LINK overhead 00:01:47.445 LINK thread 00:01:47.445 LINK scheduler 00:01:47.445 CXX test/cpp_headers/trace_parser.o 00:01:47.445 LINK fdp 00:01:47.445 CXX test/cpp_headers/ublk.o 00:01:47.445 CXX test/cpp_headers/tree.o 00:01:47.445 LINK pci_ut 00:01:47.445 LINK nvme_compliance 00:01:47.445 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:47.445 CXX test/cpp_headers/util.o 00:01:47.445 LINK nvmf 00:01:47.445 LINK idxd_perf 00:01:47.445 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:47.445 CXX test/cpp_headers/uuid.o 00:01:47.445 LINK abort 00:01:47.445 LINK arbitration 00:01:47.445 CXX test/cpp_headers/version.o 00:01:47.445 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:47.445 CXX test/cpp_headers/vfio_user_pci.o 00:01:47.445 CXX test/cpp_headers/vfio_user_spec.o 00:01:47.445 LINK reconnect 00:01:47.445 LINK test_dma 00:01:47.445 LINK dif 00:01:47.445 CXX test/cpp_headers/vhost.o 00:01:47.445 CXX test/cpp_headers/vmd.o 00:01:47.445 LINK spdk_trace 00:01:47.445 CXX test/cpp_headers/xor.o 00:01:47.445 CXX test/cpp_headers/zipf.o 00:01:47.703 LINK bdevio 00:01:47.703 LINK accel_perf 00:01:47.703 LINK blobcli 00:01:47.703 LINK nvme_manage 00:01:47.703 LINK spdk_bdev 00:01:47.703 LINK spdk_nvme 00:01:47.703 LINK nvme_fuzz 00:01:47.961 LINK spdk_nvme_perf 00:01:47.961 LINK mem_callbacks 00:01:47.961 LINK spdk_top 00:01:47.961 LINK vhost_fuzz 00:01:47.961 LINK spdk_nvme_identify 00:01:47.961 LINK bdevperf 00:01:48.219 LINK memory_ut 00:01:48.219 LINK cuse 00:01:48.786 LINK iscsi_fuzz 00:01:50.689 LINK esnap 00:01:50.689 00:01:50.689 real 0m47.976s 00:01:50.689 user 6m34.634s 00:01:50.689 sys 4m20.153s 00:01:50.689 21:17:13 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:50.689 21:17:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.689 ************************************ 00:01:50.689 END TEST make 00:01:50.689 ************************************ 00:01:50.947 21:17:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:50.947 21:17:13 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:50.947 21:17:13 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:50.947 21:17:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:50.947 21:17:13 -- pm/common@44 -- 
$ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:50.947 21:17:13 -- pm/common@45 -- $ pid=2560351 00:01:50.947 21:17:13 -- pm/common@52 -- $ sudo kill -TERM 2560351 00:01:50.947 21:17:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:50.947 21:17:13 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:50.947 21:17:13 -- pm/common@45 -- $ pid=2560352 00:01:50.947 21:17:13 -- pm/common@52 -- $ sudo kill -TERM 2560352 00:01:50.947 21:17:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:50.947 21:17:13 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:50.947 21:17:13 -- pm/common@45 -- $ pid=2560353 00:01:50.947 21:17:13 -- pm/common@52 -- $ sudo kill -TERM 2560353 00:01:50.947 21:17:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:50.947 21:17:13 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:50.947 21:17:13 -- pm/common@45 -- $ pid=2560354 00:01:50.947 21:17:13 -- pm/common@52 -- $ sudo kill -TERM 2560354
00:01:51.205 21:17:13 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:51.205 21:17:13 -- nvmf/common.sh@7 -- # uname -s 00:01:51.205 21:17:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:51.205 21:17:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:51.205 21:17:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:51.205 21:17:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:51.205 21:17:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:51.205 21:17:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:51.205 21:17:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:51.205 21:17:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:51.205 21:17:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:51.205 21:17:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:51.206 21:17:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:01:51.206 21:17:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:01:51.206 21:17:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:51.206 21:17:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:51.206 21:17:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:51.206 21:17:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:01:51.206 21:17:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:51.206 21:17:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:51.206 21:17:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:51.206 21:17:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:51.206 21:17:13 -- paths/export.sh@2 -- # [paths/export.sh@2, @3 and @4 successively prepend /opt/golangci/1.54.2/bin, /opt/go/1.21.1/bin and /opt/protoc/21.7/bin to the existing PATH (each prepend re-adds entries already present, so duplicates accumulate), @5 exports PATH, and @6 echoes the final value; the four near-identical multi-hundred-character PATH strings are collapsed here]
00:01:51.206 21:17:13 -- nvmf/common.sh@47 -- # : 0 00:01:51.206 21:17:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:51.206 21:17:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:51.206 21:17:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:51.206 21:17:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:51.206 21:17:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:51.206 21:17:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:51.206 21:17:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:51.206 21:17:13 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:01:51.206 21:17:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:51.206 21:17:13 -- spdk/autotest.sh@32 -- # uname -s 00:01:51.206 21:17:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:51.206 21:17:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:51.206 21:17:13 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:51.206 21:17:13 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:51.206 21:17:13 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:51.206 21:17:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:51.206 21:17:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:51.206 21:17:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:51.206 21:17:13 -- spdk/autotest.sh@48 -- # udevadm_pid=2620350 00:01:51.206 21:17:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:51.206 21:17:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:51.206 21:17:13 -- pm/common@17 -- # local monitor 00:01:51.206 21:17:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.206 21:17:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2620351 00:01:51.206 21:17:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.206 21:17:13 -- pm/common@21 -- # date +%s 00:01:51.206 21:17:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2620354 00:01:51.206 21:17:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.206 21:17:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2620358 00:01:51.206 21:17:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.206 21:17:13 -- pm/common@21 -- # date +%s 00:01:51.206 21:17:13 -- pm/common@21 -- # date +%s 00:01:51.206 21:17:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2620360 00:01:51.206 21:17:13 -- pm/common@26 -- # sleep 1 00:01:51.206 21:17:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713986233 00:01:51.206 21:17:13 -- pm/common@21 -- # date +%s 00:01:51.206 21:17:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713986233 00:01:51.206 21:17:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713986233 00:01:51.206 21:17:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713986233 00:01:51.206 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713986233_collect-vmstat.pm.log 00:01:51.206 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713986233_collect-cpu-load.pm.log 00:01:51.206 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713986233_collect-bmc-pm.bmc.pm.log 00:01:51.206 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713986233_collect-cpu-temp.pm.log
00:01:52.143 21:17:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:52.143 21:17:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:52.143 21:17:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:52.143 21:17:14 -- common/autotest_common.sh@10 -- # set +x 00:01:52.143 21:17:14 -- spdk/autotest.sh@59 -- # create_test_list 00:01:52.143 21:17:14 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:52.143 21:17:14 -- common/autotest_common.sh@10 -- # set +x 00:01:52.143 21:17:14 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:52.143 21:17:14 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:52.143 21:17:14 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:52.143 21:17:14 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:52.143 21:17:14 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:52.143 21:17:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:52.143 21:17:14 -- common/autotest_common.sh@1441 -- # uname 00:01:52.143 21:17:15 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:52.143 21:17:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:52.143 21:17:15 -- common/autotest_common.sh@1461 -- # uname 00:01:52.143 21:17:15 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:52.143 21:17:15 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:52.143 21:17:15 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:52.143 21:17:15 -- spdk/autotest.sh@72 -- # hash lcov 00:01:52.143 21:17:15 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
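Before autotest proper starts, pm/common launches four resource monitors (CPU load, vmstat, CPU temperature, BMC power), each remembered by a PID file under the output/power directory; the kill -TERM loop at the top of this log is the matching shutdown path, which walks MONITOR_RESOURCES and signals whatever each .pid file records. A minimal Bash sketch of that start/stop pattern, with a hypothetical monitor.sh standing in for the collect-* helpers and /tmp/power for the output directory:

    power_dir=/tmp/power                            # assumed output directory
    mkdir -p "$power_dir"
    # start: run the monitor in the background and remember its PID
    ./monitor.sh > "$power_dir/monitor.log" 2>&1 &  # hypothetical helper
    echo $! > "$power_dir/monitor.pid"
    # ... the test run happens here ...
    # stop: signal only what the PID file says we started
    if [[ -e $power_dir/monitor.pid ]]; then
        kill -TERM "$(<"$power_dir/monitor.pid")" 2>/dev/null || true
        rm -f "$power_dir/monitor.pid"
    fi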
00:01:52.143 21:17:15 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 ' [the same option string is echoed again for the plain @80 assignment and, with lcov prefixed and --no-external appended, for the @81 export and assignment of LCOV; the duplicate multi-line dumps are collapsed here]
00:01:52.143 21:17:15 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:52.402 lcov: LCOV version 1.14 00:01:52.402 21:17:15 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:01:58.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:01:58.983 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno [the same "no functions found" / "GCOV did not produce any data" warning pair repeats (timestamps 00:01:58.983 through 00:01:59.242) for each of the remaining header stubs under test/cpp_headers/, from accel_module through zipf, and later in the capture for lib/nvme/nvme_stubs.gcno (00:02:02.528) and for lib/ftl/upgrade/ftl_p2l_upgrade.gcno, ftl_band_upgrade.gcno and ftl_chunk_upgrade.gcno (00:02:10.641); the repetitive per-file warning list is collapsed here]
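The @85 invocation above is the first half of the usual two-pass lcov flow: a --initial (-i) capture records zero execution counts for every instrumented object before any test runs, so code that is never exercised later still shows up (at 0%) in the final report; the warnings are expected for header-only stubs that contain no functions. A minimal sketch of the full flow, assuming a GCC-instrumented build in the current directory and hypothetical tracefile names base.info/test.info/total.info:

    # Pass 1: zero-coverage baseline over all .gcno files, before any test runs
    lcov --capture --initial --directory . --output-file base.info
    # ... run the test suite; executed code writes .gcda counter files ...
    # Pass 2: capture the counts the tests actually produced
    lcov --capture --directory . --output-file test.info
    # Merge both: files the tests never touched keep their 0% baseline entries
    lcov --add-tracefile base.info --add-tracefile test.info --output-file total.info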
00:02:17.209 21:17:39 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:17.209 21:17:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:17.209 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:02:17.209 21:17:39 -- spdk/autotest.sh@91 -- # rm -f 00:02:17.209 21:17:39 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:19.743 0000:00:04.7 (8086 2021): Already using the ioatdma driver [the same message follows for the fifteen remaining ioatdma channels, 0000:00:04.0 through 0000:00:04.6 and 0000:80:04.0 through 0000:80:04.7; collapsed here] 00:02:19.743 0000:d8:00.0 (8086 0a54): Already using the nvme driver
00:02:19.743 21:17:42 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:19.743 21:17:42 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:19.743 21:17:42 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:19.743 21:17:42 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:19.743 21:17:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:19.743 21:17:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:19.743 21:17:42 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:19.743 21:17:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:19.743 21:17:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:19.743 21:17:42 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:19.743 21:17:42 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:19.743 21:17:42 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
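get_zoned_devs above decides whether any NVMe namespace is a zoned block device by reading its queue/zoned attribute in sysfs; only "none" (a conventional device) passes, since a zoned namespace cannot simply be zero-filled and reused like an ordinary disk. A minimal standalone sketch of the same walk, assuming only that the kernel exposes /sys/block/nvme*/queue/zoned:

    # Collect NVMe block devices whose queue reports a zoned model
    zoned=()
    for dev in /sys/block/nvme*; do
        [[ -e $dev/queue/zoned ]] || continue
        # "none" = conventional; "host-aware"/"host-managed" = zoned
        if [[ $(<"$dev/queue/zoned") != none ]]; then
            zoned+=("${dev##*/}")
        fi
    done
    printf 'zoned devices: %s\n' "${zoned[*]:-none}"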
00:02:19.743 21:17:42 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:19.744 21:17:42 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:19.744 21:17:42 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:19.744 No valid GPT data, bailing 00:02:19.744 21:17:42 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:19.744 21:17:42 -- scripts/common.sh@391 -- # pt= 00:02:19.744 21:17:42 -- scripts/common.sh@392 -- # return 1 00:02:19.744 21:17:42 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:19.744 1+0 records in 00:02:19.744 1+0 records out 00:02:19.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504621 s, 208 MB/s 00:02:19.744 21:17:42 -- spdk/autotest.sh@118 -- # sync 00:02:19.744 21:17:42 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:19.744 21:17:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:19.744 21:17:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes
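block_in_use above is a safety probe: spdk-gpt.py looks for a GPT on /dev/nvme0n1, blkid double-checks for any partition-table type, and only after both come back empty does autotest zero the first megabyte to flush stale metadata before the tests. A minimal sketch of that guard-then-wipe idea, assuming $dev names a scratch device the CI job owns:

    dev=/dev/nvme0n1   # assumed: a disposable test device
    # Refuse to wipe anything that still carries a partition table
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -n $pt ]]; then
        echo "refusing to wipe $dev: found a $pt partition table" >&2
        exit 1
    fi
    # Clobber the first MiB so leftover metadata cannot leak into the next run
    dd if=/dev/zero of="$dev" bs=1M count=1
    sync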
00:02:26.327 21:17:48 -- spdk/autotest.sh@124 -- # uname -s 00:02:26.327 21:17:48 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:26.327 21:17:48 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:26.327 21:17:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:26.327 21:17:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:26.327 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:02:26.327 ************************************ 00:02:26.327 START TEST setup.sh 00:02:26.327 ************************************ 00:02:26.327 21:17:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:26.327 * Looking for test storage... 00:02:26.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:26.327 21:17:48 -- setup/test-setup.sh@10 -- # uname -s 00:02:26.327 21:17:48 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:26.327 21:17:48 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:26.327 21:17:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:26.327 21:17:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:26.327 21:17:48 -- common/autotest_common.sh@10 -- # set +x 00:02:26.327 ************************************ 00:02:26.327 START TEST acl 00:02:26.327 ************************************ 00:02:26.327 21:17:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:26.327 * Looking for test storage... 00:02:26.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:26.327 21:17:49 -- setup/acl.sh@10 -- # get_zoned_devs [acl.sh@10 re-runs the same get_zoned_devs walk already shown after setup.sh reset; its identical trace is collapsed here] 00:02:26.327 21:17:49 -- setup/acl.sh@12 -- # devs=() 00:02:26.327 21:17:49 -- setup/acl.sh@12 -- # declare -a devs 00:02:26.327 21:17:49 -- setup/acl.sh@13 -- # drivers=() 00:02:26.327 21:17:49 -- setup/acl.sh@13 -- # declare -A drivers 00:02:26.327 21:17:49 -- setup/acl.sh@51 -- # setup reset 00:02:26.327 21:17:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:26.327 21:17:49 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:30.586 21:17:52 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:30.586 21:17:52 -- setup/acl.sh@16 -- # local dev driver 00:02:30.586 21:17:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.586 21:17:52 -- setup/acl.sh@15 -- # setup output status 00:02:30.586 21:17:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:30.586 21:17:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:33.871 Hugepages 00:02:33.871 node hugesize free / total [setup/acl.sh@19 skips the hugepage summary lines (the 1048576kB and 2048kB entries) because they do not look like PCI addresses; that short match/continue trace is collapsed here] 00:02:33.871 00:02:33.871 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:33.871 21:17:56 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:33.871 21:17:56 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:33.871 21:17:56 -- setup/acl.sh@20 -- # continue 00:02:33.871 21:17:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ [the same match/skip/re-read triple repeats for the other fifteen ioatdma channels, 0000:00:04.1 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7; collapsed here] 00:02:33.871 21:17:56 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:33.871 21:17:56 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:33.871 21:17:56 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:33.871 21:17:56 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:33.871 21:17:56 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:33.871 21:17:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:33.871 21:17:56 -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:02:33.871 21:17:56 -- setup/acl.sh@54 -- # run_test denied denied 00:02:33.871 21:17:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:33.871 21:17:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:33.871 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:02:33.871 ************************************ 00:02:33.871 START TEST denied 00:02:33.871 ************************************ 00:02:33.871 21:17:56 -- common/autotest_common.sh@1111 -- # denied 00:02:33.871 21:17:56 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:33.871 21:17:56 -- setup/acl.sh@38 -- # setup output config 00:02:33.871 21:17:56 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:33.871 21:17:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:33.871 21:17:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:37.168 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:37.168 21:17:59 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:37.168 21:17:59 -- setup/acl.sh@28 -- # local dev driver 00:02:37.169 21:17:59 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:37.169 21:17:59 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:37.169 21:17:59 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:37.169 21:17:59 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:37.169 21:17:59 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:37.169 21:17:59 -- setup/acl.sh@41 -- # setup reset 00:02:37.169 21:17:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:37.169 21:17:59 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:41.360 00:02:41.360 real 0m7.814s 00:02:41.360 user 0m2.365s 00:02:41.360 sys 0m4.747s 00:02:41.360 21:18:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:41.360 21:18:04 -- common/autotest_common.sh@10 -- # set +x 00:02:41.360 ************************************ 00:02:41.360 END TEST denied 00:02:41.360 ************************************
00:02:41.619 21:18:04 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:41.619 21:18:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:41.619 21:18:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:41.619 21:18:04 -- common/autotest_common.sh@10 -- # set +x 00:02:41.619 ************************************ 00:02:41.619 START TEST allowed 00:02:41.619 ************************************ 00:02:41.619 21:18:04 -- common/autotest_common.sh@1111 -- # allowed 00:02:41.619 21:18:04 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:41.619 21:18:04 -- setup/acl.sh@45 -- # setup output config 00:02:41.619 21:18:04 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:41.619 21:18:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.619 21:18:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:46.889
0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:46.889 21:18:09 -- setup/acl.sh@47 -- # verify 00:02:46.889 21:18:09 -- setup/acl.sh@28 -- # local dev driver 00:02:46.889 21:18:09 -- setup/acl.sh@48 -- # setup reset 00:02:46.889 21:18:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:46.889 21:18:09 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:51.082 00:02:51.082 real 0m8.727s 00:02:51.082 user 0m2.509s 00:02:51.082 sys 0m4.893s 00:02:51.083 21:18:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:51.083 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:02:51.083 ************************************ 00:02:51.083 END TEST allowed 00:02:51.083 ************************************ 00:02:51.083 00:02:51.083 real 0m24.211s 00:02:51.083 user 0m7.651s 00:02:51.083 sys 0m14.777s 00:02:51.083 21:18:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:51.083 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:02:51.083 ************************************ 00:02:51.083 END TEST acl 00:02:51.083 ************************************
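The denied/allowed pair above exercises the PCI filter environment variables that SPDK's setup.sh honors: with PCI_BLOCKED=' 0000:d8:00.0' the config pass logs "Skipping denied controller" and leaves the device on the kernel nvme driver, while PCI_ALLOWED=0000:d8:00.0 restricts the pass to that one controller and rebinds it to vfio-pci. A minimal sketch of driving the same knobs from a wrapper, assuming $SPDK_DIR points at an SPDK checkout and $bdf names a controller that is safe to rebind:

    SPDK_DIR=/path/to/spdk      # assumed checkout location
    bdf=0000:d8:00.0            # assumed scratch NVMe controller
    # Bind only this controller for userspace use; everything else is skipped
    sudo PCI_ALLOWED="$bdf" "$SPDK_DIR/scripts/setup.sh" config
    # ... run tests against the vfio-pci-bound device ...
    # Return the controller to the kernel driver when done
    sudo "$SPDK_DIR/scripts/setup.sh" reset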
00:02:51.083 21:18:13 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:51.083 21:18:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:51.083 21:18:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:51.083 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:02:51.083 ************************************ 00:02:51.083 START TEST hugepages 00:02:51.083 ************************************ 00:02:51.083 21:18:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:51.083 * Looking for test storage... 00:02:51.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:51.083 21:18:13 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:51.083 21:18:13 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:51.083 21:18:13 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:51.083 21:18:13 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:51.083 21:18:13 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:51.083 21:18:13 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:51.083 21:18:13 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:51.083 21:18:13 -- setup/common.sh@18 -- # local node= 00:02:51.083 21:18:13 -- setup/common.sh@19 -- # local var val 00:02:51.083 21:18:13 -- setup/common.sh@20 -- # local mem_f mem 00:02:51.083 21:18:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.083 21:18:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.083 21:18:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.083 21:18:13 -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.083 21:18:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.083 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.083 21:18:13 -- setup/common.sh@31 -- # read -r var val _
00:02:51.083 21:18:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 37767284 kB' 'MemAvailable: 42400700 kB' 'Buffers: 2696 kB' 'Cached: 14090316 kB' 'SwapCached: 0 kB' 'Active: 11016428 kB' 'Inactive: 3664300 kB' 'Active(anon): 9905272 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 591176 kB' 'Mapped: 215624 kB' 'Shmem: 9317556 kB' 'KReclaimable: 505668 kB' 'Slab: 1160196 kB' 'SReclaimable: 505668 kB' 'SUnreclaim: 654528 kB' 'KernelStack: 22176 kB' 'PageTables: 9476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 11310880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217052 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:02:51.083 21:18:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.083 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.083 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.083 21:18:13 -- setup/common.sh@31 -- # read -r var val _ [the identical compare/continue/re-read trace then repeats (timestamps 00:02:51.083 through 00:02:51.084) for each non-matching key in turn: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack and PageTables; collapsed here]
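The trace above shows setup/common.sh's get_meminfo at work: mapfile slurps /proc/meminfo into an array, an extglob substitution strips the "Node <n> " prefix that per-node meminfo files carry, and IFS=': ' splits each "Key: value kB" record before the loop compares every key against Hugepagesize. A compact self-contained sketch of the same parser (the same mechanics, not the verbatim SPDK implementation):

    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo
        # per-NUMA-node reads come from the node's own meminfo file
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        # node files prefix every line with "Node <n> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }
    get_meminfo Hugepagesize    # prints e.g. 2048 on this runner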
00:02:51.084 21:18:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.084 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.084 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 
00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # continue 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # IFS=': ' 00:02:51.085 21:18:13 -- setup/common.sh@31 -- # read -r var val _ 00:02:51.085 21:18:13 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:51.085 21:18:13 -- setup/common.sh@33 -- # echo 2048 00:02:51.085 21:18:13 -- setup/common.sh@33 -- # return 0 00:02:51.085 21:18:13 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:51.085 21:18:13 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:51.085 21:18:13 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:51.085 21:18:13 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:51.085 21:18:13 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:51.085 21:18:13 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:51.085 21:18:13 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:51.085 21:18:13 -- setup/hugepages.sh@207 -- # get_nodes 00:02:51.085 21:18:13 -- setup/hugepages.sh@27 -- # local node 00:02:51.085 21:18:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.085 21:18:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:51.085 21:18:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.085 21:18:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:51.085 21:18:13 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:51.085 21:18:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:51.085 21:18:13 -- setup/hugepages.sh@208 -- # clear_hp 00:02:51.085 21:18:13 -- setup/hugepages.sh@37 -- # local node hp 00:02:51.085 21:18:13 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:51.085 21:18:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:51.085 21:18:13 -- setup/hugepages.sh@41 -- # echo 0 00:02:51.085 21:18:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:51.085 21:18:13 -- setup/hugepages.sh@41 -- # echo 0 00:02:51.085 21:18:13 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:51.085 21:18:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:51.085 21:18:13 -- setup/hugepages.sh@41 -- # echo 0 00:02:51.085 21:18:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:51.085 21:18:13 -- setup/hugepages.sh@41 -- # echo 0 00:02:51.085 21:18:13 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:51.085 21:18:13 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:51.085 21:18:13 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:51.085 21:18:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:51.085 21:18:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:51.085 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:02:51.085 ************************************ 00:02:51.085 START TEST default_setup 00:02:51.085 ************************************ 00:02:51.085 21:18:13 -- common/autotest_common.sh@1111 -- # default_setup 00:02:51.085 21:18:13 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:51.086 21:18:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:51.086 21:18:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:51.086 21:18:13 -- setup/hugepages.sh@51 -- # shift 00:02:51.086 21:18:13 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:51.086 21:18:13 -- setup/hugepages.sh@52 -- # local node_ids 00:02:51.086 21:18:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:51.086 21:18:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:51.086 21:18:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:51.086 21:18:13 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:51.086 21:18:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:51.086 21:18:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:51.086 21:18:13 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:51.086 21:18:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:51.086 21:18:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:51.086 21:18:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
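The setup/common.sh@31-@33 entries above show the shape of the get_meminfo lookup: split each /proc/meminfo line on ': ', skip every key that is not the requested field, and echo the first matching value (2048 for Hugepagesize). A minimal sketch of that pattern, assuming a standalone helper (the name get_meminfo_value is hypothetical, and the real script additionally handles the per-node /sys/devices/system/node/node*/meminfo files, per common.sh@23):

# Sketch only: standalone version of the lookup loop traced above.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip fields until the requested key matches, e.g. Hugepagesize.
        [[ $var == "$get" ]] || continue
        echo "$val"   # value column; trailing "kB" lands in $_ and is dropped
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_value Hugepagesize   # prints 2048 on a 2 MB-hugepage system

Under that reading, the long runs of continue entries in this trace are simply one loop iteration per meminfo field until the requested key is reached.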
00:02:51.086 21:18:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:51.086 21:18:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:51.086 21:18:13 -- setup/hugepages.sh@73 -- # return 0 00:02:51.086 21:18:13 -- setup/hugepages.sh@137 -- # setup output 00:02:51.086 21:18:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.086 21:18:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:53.622 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:53.622 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:53.622 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:53.622 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:53.622 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:53.622 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:53.622 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:53.884 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:53.884 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:53.884 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:53.884 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:53.884 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:53.884 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:53.884 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:53.884 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:53.884 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:55.804 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:55.804 21:18:18 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:55.804 21:18:18 -- setup/hugepages.sh@89 -- # local node 00:02:55.804 21:18:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:55.804 21:18:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:55.804 21:18:18 -- setup/hugepages.sh@92 -- # local surp 00:02:55.804 21:18:18 -- setup/hugepages.sh@93 -- # local resv 00:02:55.804 21:18:18 -- setup/hugepages.sh@94 -- # local anon 00:02:55.804 21:18:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:55.804 21:18:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:55.804 21:18:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:55.804 21:18:18 -- setup/common.sh@18 -- # local node= 00:02:55.804 21:18:18 -- setup/common.sh@19 -- # local var val 00:02:55.804 21:18:18 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.804 21:18:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.804 21:18:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.804 21:18:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.804 21:18:18 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.804 21:18:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.804 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.804 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.805 21:18:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39959056 kB' 'MemAvailable: 44592464 kB' 'Buffers: 2696 kB' 'Cached: 14090440 kB' 'SwapCached: 0 kB' 'Active: 11029856 kB' 'Inactive: 3664300 kB' 'Active(anon): 9918700 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603952 kB' 'Mapped: 215500 kB' 'Shmem: 9317680 kB' 'KReclaimable: 505660 kB' 'Slab: 1158272 kB' 'SReclaimable: 505660 kB' 'SUnreclaim: 652612 kB' 'KernelStack: 22224 
kB' 'PageTables: 9824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11323900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217276 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB' 00:02:55.805 21:18:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.805 21:18:18 -- setup/common.sh@32 -- # continue
[get_meminfo AnonHugePages lookup: identical "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue / IFS=': ' / read -r var val _" iterations for MemFree through VmallocTotal, none matching]
00:02:55.805 21:18:18 --
setup/common.sh@31 -- # read -r var val _ 00:02:55.805 21:18:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.805 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.805 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.805 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.805 21:18:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.805 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.805 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.805 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.805 21:18:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.806 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.806 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.806 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.806 21:18:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.806 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.806 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.806 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.806 21:18:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.806 21:18:18 -- setup/common.sh@33 -- # echo 0 00:02:55.806 21:18:18 -- setup/common.sh@33 -- # return 0 00:02:55.806 21:18:18 -- setup/hugepages.sh@97 -- # anon=0 00:02:55.806 21:18:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:55.806 21:18:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.806 21:18:18 -- setup/common.sh@18 -- # local node= 00:02:55.806 21:18:18 -- setup/common.sh@19 -- # local var val 00:02:55.806 21:18:18 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.806 21:18:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.806 21:18:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.806 21:18:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.806 21:18:18 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.806 21:18:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.806 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.806 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.806 21:18:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39958244 kB' 'MemAvailable: 44591652 kB' 'Buffers: 2696 kB' 'Cached: 14090452 kB' 'SwapCached: 0 kB' 'Active: 11028800 kB' 'Inactive: 3664300 kB' 'Active(anon): 9917644 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603404 kB' 'Mapped: 215316 kB' 'Shmem: 9317692 kB' 'KReclaimable: 505660 kB' 'Slab: 1158288 kB' 'SReclaimable: 505660 kB' 'SUnreclaim: 652628 kB' 'KernelStack: 22352 kB' 'PageTables: 9596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11324028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217276 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 
kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB' 00:02:55.806 21:18:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.806 21:18:18 -- setup/common.sh@32 -- # continue
[get_meminfo HugePages_Surp lookup: identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" iterations for MemFree through HugePages_Free, none matching]
00:02:55.807 21:18:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.807 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.807 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.807 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.807 21:18:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.807 21:18:18 -- setup/common.sh@33 -- # echo 0 00:02:55.807 21:18:18 -- setup/common.sh@33 -- # return 0 00:02:55.807 21:18:18 -- setup/hugepages.sh@99 -- # surp=0 00:02:55.807 21:18:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:55.807 21:18:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:55.807 21:18:18 -- setup/common.sh@18 -- # local node= 00:02:55.807 21:18:18 -- setup/common.sh@19 -- # local var val 00:02:55.807 21:18:18 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.807 21:18:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.807 21:18:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.807 21:18:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.807 21:18:18 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.807 21:18:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.807 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.807 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.807 21:18:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39957240 kB' 'MemAvailable: 44590648 kB' 'Buffers: 2696 kB' 'Cached: 14090456 kB' 'SwapCached: 0 kB' 'Active: 11029148 kB' 'Inactive: 3664300 kB' 'Active(anon): 9917992 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603816 kB' 'Mapped: 215316 kB' 'Shmem: 9317696 kB' 'KReclaimable: 505660 kB' 'Slab: 1158268 kB' 'SReclaimable: 505660 kB' 'SUnreclaim: 652608 kB' 'KernelStack: 22384 kB' 'PageTables: 10400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11323928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217196 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB' 00:02:55.807 21:18:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.807 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.807 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.807 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.807 21:18:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.807 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.807 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.807 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.807 21:18:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.807 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.807 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.807 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.807 21:18:18 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.807 21:18:18 -- setup/common.sh@32 -- # continue
[get_meminfo HugePages_Rsvd lookup: identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue / IFS=': ' / read -r var val _" iterations for Cached through AnonHugePages, none matching]
00:02:55.808 21:18:18 --
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # continue 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.808 21:18:18 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.808 21:18:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.808 21:18:18 -- setup/common.sh@33 -- # echo 0 00:02:55.808 21:18:18 -- setup/common.sh@33 -- # return 0 00:02:55.808 21:18:18 -- setup/hugepages.sh@100 -- # resv=0 00:02:55.808 21:18:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:55.808 nr_hugepages=1024 00:02:55.808 21:18:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:55.808 resv_hugepages=0 00:02:55.808 21:18:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:55.808 surplus_hugepages=0 00:02:55.808 21:18:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:55.808 anon_hugepages=0 00:02:55.808 21:18:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.809 21:18:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:55.809 21:18:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:55.809 21:18:18 -- setup/common.sh@17 -- # local 
00:02:55.809 21:18:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:55.809 21:18:18 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:55.809 21:18:18 -- setup/common.sh@18 -- # local node=
00:02:55.809 21:18:18 -- setup/common.sh@19 -- # local var val
00:02:55.809 21:18:18 -- setup/common.sh@20 -- # local mem_f mem
00:02:55.809 21:18:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:55.809 21:18:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:55.809 21:18:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:55.809 21:18:18 -- setup/common.sh@28 -- # mapfile -t mem
00:02:55.809 21:18:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:55.809 21:18:18 -- setup/common.sh@31 -- # IFS=': '
00:02:55.809 21:18:18 -- setup/common.sh@31 -- # read -r var val _
00:02:55.809 21:18:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39956776 kB' 'MemAvailable: 44590184 kB' 'Buffers: 2696 kB' 'Cached: 14090468 kB' 'SwapCached: 0 kB' 'Active: 11028880 kB' 'Inactive: 3664300 kB' 'Active(anon): 9917724 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603428 kB' 'Mapped: 215316 kB' 'Shmem: 9317708 kB' 'KReclaimable: 505660 kB' 'Slab: 1158268 kB' 'SReclaimable: 505660 kB' 'SUnreclaim: 652608 kB' 'KernelStack: 22192 kB' 'PageTables: 9820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11323940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217260 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:02:55.809 21:18:18 -- setup/common.sh@31-@32 -- # [xtrace condensed: the scan steps past every field from MemTotal through Unaccepted without a match]
00:02:55.810 21:18:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:55.810 21:18:18 -- setup/common.sh@33 -- # echo 1024
00:02:55.810 21:18:18 -- setup/common.sh@33 -- # return 0
00:02:55.810 21:18:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:55.810 21:18:18 -- setup/hugepages.sh@112 -- # get_nodes
00:02:55.810 21:18:18 -- setup/hugepages.sh@27 -- # local node
00:02:55.810 21:18:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:55.810 21:18:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:55.810 21:18:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:55.810 21:18:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:55.810 21:18:18 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:55.810 21:18:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:55.810 21:18:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:55.810 21:18:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
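get_nodes enumerates /sys/devices/system/node/node* (two nodes here: node0 holds all 1024 pages, node1 none), and the @117 call that follows below re-runs get_meminfo against the node-local meminfo file, whose lines carry a "Node N" prefix that common.sh@29 strips. A rough standalone equivalent of that per-node readout (a sketch, assuming a NUMA kernel that exposes per-node meminfo files):

    #!/usr/bin/env bash
    # Sketch: per-node hugepage readout, mimicking the traced
    # get_nodes + get_meminfo <field> <node> sequence.
    shopt -s nullglob
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node lines look like "Node 0 HugePages_Total:  1024";
        # common.sh@29 strips that prefix, here awk skips two columns.
        pages=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
        echo "node${node}=${pages}"
    done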
00:02:55.810 21:18:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:55.810 21:18:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:55.810 21:18:18 -- setup/common.sh@18 -- # local node=0
00:02:55.810 21:18:18 -- setup/common.sh@19 -- # local var val
00:02:55.810 21:18:18 -- setup/common.sh@20 -- # local mem_f mem
00:02:55.810 21:18:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:55.810 21:18:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:55.810 21:18:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:55.810 21:18:18 -- setup/common.sh@28 -- # mapfile -t mem
00:02:55.810 21:18:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:55.810 21:18:18 -- setup/common.sh@31 -- # IFS=': '
00:02:55.810 21:18:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19178840 kB' 'MemUsed: 13460300 kB' 'SwapCached: 0 kB' 'Active: 6746504 kB' 'Inactive: 3290148 kB' 'Active(anon): 6212372 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3290148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9644580 kB' 'Mapped: 130264 kB' 'AnonPages: 395260 kB' 'Shmem: 5820300 kB' 'KernelStack: 12088 kB' 'PageTables: 5176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334596 kB' 'Slab: 651460 kB' 'SReclaimable: 334596 kB' 'SUnreclaim: 316864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:55.810 21:18:18 -- setup/common.sh@31 -- # read -r var val _
00:02:55.810 21:18:18 -- setup/common.sh@31-@32 -- # [xtrace condensed: the scan of node0 meminfo steps past MemTotal through HugePages_Free without a match]
00:02:55.811 21:18:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:55.811 21:18:18 -- setup/common.sh@33 -- # echo 0
00:02:55.811 21:18:18 -- setup/common.sh@33 -- # return 0
00:02:55.811 21:18:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:55.811 21:18:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:55.811 21:18:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:55.811 21:18:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:55.811 21:18:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:55.811 node0=1024 expecting 1024
00:02:55.811 21:18:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:55.811 
00:02:55.811 real 0m4.728s
00:02:55.811 user 0m1.056s
00:02:55.811 sys 0m2.060s
00:02:55.811 21:18:18 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:55.811 21:18:18 -- common/autotest_common.sh@10 -- # set +x
00:02:55.811 ************************************
00:02:55.811 END TEST default_setup
00:02:55.811 ************************************
00:02:55.811 21:18:18 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:55.811 21:18:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:55.811 21:18:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:55.811 21:18:18 -- common/autotest_common.sh@10 -- # set +x
00:02:55.811 ************************************
00:02:55.811 START TEST per_node_1G_alloc
00:02:55.811 ************************************
00:02:55.811 21:18:18 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:02:55.811 21:18:18 -- setup/hugepages.sh@143 -- # local IFS=,
00:02:55.811 21:18:18 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:55.811 21:18:18 -- setup/hugepages.sh@49 -- # local size=1048576
00:02:55.811 21:18:18 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:55.811 21:18:18 -- setup/hugepages.sh@51 -- # shift
00:02:55.811 21:18:18 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:55.811 21:18:18 -- setup/hugepages.sh@52 -- # local node_ids
00:02:55.811 21:18:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:55.811 21:18:18 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:55.811 21:18:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:55.811 21:18:18 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:55.811 21:18:18 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:55.811 21:18:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:55.811 21:18:18 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:55.811 21:18:18 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:55.811 21:18:18 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:55.811 21:18:18 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:55.811 21:18:18 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:55.811 21:18:18 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:55.811 21:18:18 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:55.811 21:18:18 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:55.811 21:18:18 -- setup/hugepages.sh@73 -- # return 0
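The arithmetic behind get_test_nr_hugepages 1048576 0 1: the requested size is 1048576 kB (1 GiB), the default hugepage is the 2048 kB page visible in the snapshots above, so nr_hugepages = 1048576 / 2048 = 512, and get_test_nr_hugepages_per_node then gives each requested node the full count, matching nodes_test[0]=512 and nodes_test[1]=512 in the trace. The same computation spelled out as a sketch (variable names ours, not the script's):

    #!/usr/bin/env bash
    # Sketch of the size-to-pages conversion inside get_test_nr_hugepages.
    size_kb=1048576      # requested allocation: 1 GiB expressed in kB
    hugepage_kb=2048     # Hugepagesize reported in the meminfo snapshots
    nr_hugepages=$((size_kb / hugepage_kb))   # 1048576 / 2048 = 512

    # Each node listed in HUGENODE=0,1 is planned for the full count.
    for node in 0 1; do
        echo "node${node}: ${nr_hugepages} hugepages planned"
    done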
00:02:55.811 21:18:18 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:55.811 21:18:18 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:55.811 21:18:18 -- setup/hugepages.sh@146 -- # setup output
00:02:55.811 21:18:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:55.811 21:18:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:59.097 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:02:59.097 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:59.361 21:18:22 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:59.361 21:18:22 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
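The @146 lines hand the plan to the repository's setup script: NRHUGE (pages per node) and HUGENODE (comma-separated node list) are set and scripts/setup.sh re-provisions hugepages; the NIC and NVMe functions are already bound to vfio-pci from the earlier run, hence the "Already using" messages. Reproducing that step by hand would look roughly like this (a sketch; run as root, values and semantics taken from the trace above):

    # Sketch: manual equivalent of the traced @146 invocation.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NRHUGE=512 HUGENODE=0,1 ./scripts/setup.sh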
00:02:59.361 21:18:22 -- setup/hugepages.sh@89 -- # local node
00:02:59.361 21:18:22 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:59.361 21:18:22 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:59.361 21:18:22 -- setup/hugepages.sh@92 -- # local surp
00:02:59.361 21:18:22 -- setup/hugepages.sh@93 -- # local resv
00:02:59.361 21:18:22 -- setup/hugepages.sh@94 -- # local anon
00:02:59.361 21:18:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:59.361 21:18:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:59.361 21:18:22 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:59.361 21:18:22 -- setup/common.sh@18 -- # local node=
00:02:59.361 21:18:22 -- setup/common.sh@19 -- # local var val
00:02:59.361 21:18:22 -- setup/common.sh@20 -- # local mem_f mem
00:02:59.361 21:18:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:59.361 21:18:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:59.361 21:18:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:59.361 21:18:22 -- setup/common.sh@28 -- # mapfile -t mem
00:02:59.361 21:18:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:59.361 21:18:22 -- setup/common.sh@31 -- # IFS=': '
00:02:59.361 21:18:22 -- setup/common.sh@31 -- # read -r var val _
00:02:59.361 21:18:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39951916 kB' 'MemAvailable: 44585184 kB' 'Buffers: 2696 kB' 'Cached: 14090560 kB' 'SwapCached: 0 kB' 'Active: 11026304 kB' 'Inactive: 3664300 kB' 'Active(anon): 9915148 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 600568 kB' 'Mapped: 214096 kB' 'Shmem: 9317800 kB' 'KReclaimable: 505532 kB' 'Slab: 1159216 kB' 'SReclaimable: 505532 kB' 'SUnreclaim: 653684 kB' 'KernelStack: 22112 kB' 'PageTables: 9364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11310512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217228 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:02:59.362 21:18:22 -- setup/common.sh@31-@32 -- # [xtrace condensed: the scan steps past MemTotal through HardwareCorrupted without a match]
00:02:59.362 21:18:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:59.362 21:18:22 -- setup/common.sh@33 -- # echo 0
00:02:59.362 21:18:22 -- setup/common.sh@33 -- # return 0
00:02:59.362 21:18:22 -- setup/hugepages.sh@97 -- # anon=0
00:02:59.362 21:18:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:59.362 21:18:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:59.362 21:18:22 -- setup/common.sh@18 -- # local node=
00:02:59.362 21:18:22 -- setup/common.sh@19 -- # local var val
00:02:59.362 21:18:22 -- setup/common.sh@20 -- # local mem_f mem
00:02:59.362 21:18:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:59.362 21:18:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:59.362 21:18:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:59.362 21:18:22 -- setup/common.sh@28 -- # mapfile -t mem
00:02:59.362 21:18:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:59.362 21:18:22 -- setup/common.sh@31 -- # IFS=': '
00:02:59.362 21:18:22 -- setup/common.sh@31 -- # read -r var val _
00:02:59.362 21:18:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39953784 kB' 'MemAvailable: 44587064 kB' 'Buffers: 2696 kB' 'Cached: 14090560 kB' 'SwapCached: 0 kB' 'Active: 11026008 kB' 'Inactive: 3664300 kB' 'Active(anon): 9914852 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 600304 kB' 'Mapped: 214152 kB' 'Shmem: 9317800 kB' 'KReclaimable: 505532 kB' 'Slab: 1159256 kB' 'SReclaimable: 505532 kB' 'SUnreclaim: 653724 kB' 'KernelStack: 22096 kB' 'PageTables: 9340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11310492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217180 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:02:59.363 21:18:22 -- setup/common.sh@31-@32 -- # [xtrace condensed: the scan steps past MemTotal through HugePages_Rsvd without a match]
00:02:59.363 21:18:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:59.363 21:18:22 -- setup/common.sh@33 -- # echo 0
00:02:59.363 21:18:22 -- setup/common.sh@33 -- # return 0
00:02:59.363 21:18:22 -- setup/hugepages.sh@99 -- # surp=0
00:02:59.363 21:18:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:59.363 21:18:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:59.363 21:18:22 -- setup/common.sh@18 -- # local node=
00:02:59.363 21:18:22 -- setup/common.sh@19 -- # local var val
00:02:59.364 21:18:22 -- setup/common.sh@20 -- # local mem_f mem
00:02:59.364 21:18:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:59.364 21:18:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:59.364 21:18:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:59.364 21:18:22 -- setup/common.sh@28 -- # mapfile -t mem
00:02:59.364 21:18:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': '
00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _
00:02:59.364 21:18:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39953536 kB' 'MemAvailable: 44586816 kB' 'Buffers: 2696 kB' 'Cached: 14090576 kB' 'SwapCached: 0 kB' 'Active: 11025636 kB' 'Inactive: 3664300 kB' 'Active(anon): 9914480 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599840 kB' 'Mapped: 214092 kB' 'Shmem: 9317816 kB' 'KReclaimable: 505532 kB' 'Slab: 1159256 kB' 'SReclaimable: 505532 kB' 'SUnreclaim: 653724 kB' 'KernelStack: 22064 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11310500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217164 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:02:59.364 21:18:22 -- setup/common.sh@31-@32 -- # [xtrace condensed: the scan begins, stepping past MemTotal, MemFree, MemAvailable …]
00:02:59.364 21:18:22 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.364 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.364 21:18:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 
00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.365 21:18:22 -- setup/common.sh@33 -- # echo 0 00:02:59.365 21:18:22 -- setup/common.sh@33 -- # return 0 00:02:59.365 21:18:22 -- setup/hugepages.sh@100 -- # resv=0 00:02:59.365 21:18:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:59.365 nr_hugepages=1024 00:02:59.365 21:18:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:59.365 resv_hugepages=0 00:02:59.365 21:18:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:59.365 surplus_hugepages=0 00:02:59.365 21:18:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:59.365 anon_hugepages=0 00:02:59.365 21:18:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:59.365 21:18:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
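For reference, the get_meminfo calls traced above boil down to a small parser over /proc/meminfo (or a node's sysfs copy of it). The following is a minimal sketch reconstructed from the xtrace output; it mirrors the traced logic but is an illustration, not the exact SPDK source:

    # Sketch of the get_meminfo pattern seen in the trace (names reconstructed,
    # not copied from the repo).
    shopt -s extglob                      # enables the +([0-9]) pattern below

    get_meminfo() {                       # usage: get_meminfo <field> [<numa-node>]
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem line
        # With a node argument, read that node's copy of meminfo from sysfs.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix lines with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    # e.g. get_meminfo HugePages_Total  -> 1024 on this box
    #      get_meminfo HugePages_Surp 0 -> node 0's surplus count (0 here)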
00:02:59.365 21:18:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:59.365 21:18:22 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:59.365 21:18:22 -- setup/common.sh@18 -- # local node=
00:02:59.365 21:18:22 -- setup/common.sh@19 -- # local var val
00:02:59.365 21:18:22 -- setup/common.sh@20 -- # local mem_f mem
00:02:59.365 21:18:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:59.365 21:18:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:59.365 21:18:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:59.365 21:18:22 -- setup/common.sh@28 -- # mapfile -t mem
00:02:59.365 21:18:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:59.365 21:18:22 -- setup/common.sh@31 -- # IFS=': '
00:02:59.365 21:18:22 -- setup/common.sh@31 -- # read -r var val _
00:02:59.365 21:18:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39953536 kB' 'MemAvailable: 44586816 kB' 'Buffers: 2696 kB' 'Cached: 14090592 kB' 'SwapCached: 0 kB' 'Active: 11025600 kB' 'Inactive: 3664300 kB' 'Active(anon): 9914444 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 599844 kB' 'Mapped: 214092 kB' 'Shmem: 9317832 kB' 'KReclaimable: 505532 kB' 'Slab: 1159256 kB' 'SReclaimable: 505532 kB' 'SUnreclaim: 653724 kB' 'KernelStack: 22064 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11310520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217164 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:02:59.365 21:18:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:59.365 21:18:22 -- setup/common.sh@32 -- # continue
[... the compare/continue cycle repeats for each field until HugePages_Total matches ...]
00:02:59.367 21:18:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:59.367 21:18:22 -- setup/common.sh@33 -- # echo 1024
00:02:59.367 21:18:22 -- setup/common.sh@33 -- # return 0
00:02:59.367 21:18:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:59.367 21:18:22 -- setup/hugepages.sh@112 -- # get_nodes
00:02:59.367 21:18:22 -- setup/hugepages.sh@27 -- # local node
00:02:59.367 21:18:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:59.367 21:18:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:59.367 21:18:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:59.367 21:18:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:59.367 21:18:22 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:59.367 21:18:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:59.367 21:18:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:59.367 21:18:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:59.367 21:18:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:59.367 21:18:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:59.367 21:18:22 -- setup/common.sh@18 -- # local node=0
00:02:59.367 21:18:22 -- setup/common.sh@19 -- # local var val
00:02:59.367 21:18:22 -- setup/common.sh@20 -- # local mem_f mem
00:02:59.367 21:18:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:59.367 21:18:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:59.367 21:18:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:59.367 21:18:22 -- setup/common.sh@28 -- # mapfile -t mem
00:02:59.367 21:18:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:59.367 21:18:22 -- setup/common.sh@31 -- # IFS=': '
00:02:59.367 21:18:22 -- setup/common.sh@31 -- # read -r var val _
00:02:59.367 21:18:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20237040 kB' 'MemUsed: 12402100 kB' 'SwapCached: 0 kB' 'Active: 6746392 kB' 'Inactive: 3290148 kB' 'Active(anon): 6212260 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3290148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9644620 kB' 'Mapped: 129040 kB' 'AnonPages: 395052 kB' 'Shmem: 5820340 kB' 'KernelStack: 11976 kB' 'PageTables: 4864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334468 kB' 'Slab: 651448 kB' 'SReclaimable: 334468 kB' 'SUnreclaim: 316980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:59.367 21:18:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:59.367 21:18:22 -- setup/common.sh@32 -- # continue
[... the compare/continue cycle repeats for each node0 field until HugePages_Surp matches ...]
00:02:59.368 21:18:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:59.368 21:18:22 -- setup/common.sh@33 -- # echo 0
00:02:59.368 21:18:22 -- setup/common.sh@33 -- # return 0
00:02:59.368 21:18:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:59.368 21:18:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:59.368 21:18:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:59.368 21:18:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:59.368 21:18:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:59.368 21:18:22 -- setup/common.sh@18 -- # local node=1
00:02:59.368 21:18:22 -- setup/common.sh@19 -- # local var val
00:02:59.368 21:18:22 -- setup/common.sh@20 -- # local mem_f mem
00:02:59.368 21:18:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:59.368 21:18:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:59.368 21:18:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:59.368 21:18:22 -- setup/common.sh@28 -- # mapfile -t mem
00:02:59.368 21:18:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:59.368 21:18:22 -- setup/common.sh@31 -- # IFS=': '
00:02:59.368 21:18:22 -- setup/common.sh@31 -- # read -r var val _
00:02:59.368 21:18:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656076 kB' 'MemFree: 19715740 kB' 'MemUsed: 7940336 kB' 'SwapCached: 0 kB' 'Active: 4279832 kB' 'Inactive: 374152 kB' 'Active(anon): 3702808 kB' 'Inactive(anon): 0 kB' 'Active(file): 577024 kB' 'Inactive(file): 374152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4448692 kB' 'Mapped: 85052 kB' 'AnonPages: 205368 kB' 'Shmem: 3497516 kB' 'KernelStack: 10136 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 171064 kB' 'Slab: 507808 kB' 'SReclaimable: 171064 kB' 'SUnreclaim: 336744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:59.368 21:18:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:59.368 21:18:22 -- setup/common.sh@32 -- # continue
[... the compare/continue cycle repeats for each node1 field until HugePages_Surp matches ...]
00:02:59.369 21:18:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:59.369 21:18:22 -- setup/common.sh@33 -- # echo 0
00:02:59.369 21:18:22 -- setup/common.sh@33 -- # return 0
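Both node queries above return HugePages_Surp = 0, and each node's meminfo snapshot reports HugePages_Total: 512. The same per-node counters are also exposed directly by the kernel's hugepage sysfs directories; a quick equivalent check, assuming the standard 2048 kB page size seen in the trace:

    # Read each node's 2 MiB hugepage count from the standard sysfs interface.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        nr=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        echo "${node_dir##*/}: $nr x 2048 kB hugepages (expecting 512)"
    done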
00:02:59.368 [... setup/common.sh@31-32 trace elided: each remaining /proc/meminfo key from Active through HugePages_Free is read ("IFS=': '" / "read -r var val _"), compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, and skipped with "continue" until the match below ...]
00:02:59.369 21:18:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:59.369 21:18:22 -- setup/common.sh@33 -- # echo 0
00:02:59.369 21:18:22 -- setup/common.sh@33 -- # return 0
00:02:59.369 21:18:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:59.369 21:18:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:59.369 21:18:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:59.369 21:18:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:59.369 21:18:22 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:59.369 node0=512 expecting 512
00:02:59.369 21:18:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:59.369 21:18:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:59.369 21:18:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:59.369 21:18:22 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:59.369 node1=512 expecting 512
00:02:59.369 21:18:22 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:59.369
00:02:59.369 real 0m3.495s
00:02:59.369 user 0m1.252s
00:02:59.369 sys 0m2.299s
00:02:59.369 21:18:22 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:59.369 21:18:22 -- common/autotest_common.sh@10 -- # set +x
00:02:59.369 ************************************
00:02:59.369 END TEST per_node_1G_alloc
00:02:59.369 ************************************
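The per-key churn above is setup/common.sh's get_meminfo helper at work: it snapshots the meminfo source once with mapfile, then replays it entry by entry with "IFS=': '" / "read -r var val _", skipping every key until the requested one is found and its value is echoed. A minimal standalone sketch of the same parsing idea (streaming straight from the file instead of through mapfile; names mirror the trace, not the SPDK sources):

get_meminfo() {
    # Print the value column for one /proc/meminfo key,
    # e.g. `get_meminfo HugePages_Surp` -> 0 on this box.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the "continue" entries in the trace
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1   # key not present
}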
00:02:59.369 21:18:22 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:02:59.369 21:18:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:59.369 21:18:22 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:59.369 21:18:22 -- common/autotest_common.sh@10 -- # set +x
00:02:59.629 ************************************
00:02:59.629 START TEST even_2G_alloc
00:02:59.629 ************************************
00:02:59.629 21:18:22 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:02:59.629 21:18:22 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:02:59.629 21:18:22 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:59.629 21:18:22 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:59.629 21:18:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:59.629 21:18:22 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:59.629 21:18:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:59.629 21:18:22 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:59.629 21:18:22 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:59.629 21:18:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:59.629 21:18:22 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:59.629 21:18:22 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:59.629 21:18:22 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:59.629 21:18:22 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:59.629 21:18:22 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:59.629 21:18:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:59.629 21:18:22 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:59.629 21:18:22 -- setup/hugepages.sh@83 -- # : 512
00:02:59.629 21:18:22 -- setup/hugepages.sh@84 -- # : 1
00:02:59.629 21:18:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:59.629 21:18:22 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:59.629 21:18:22 -- setup/hugepages.sh@83 -- # : 0
00:02:59.629 21:18:22 -- setup/hugepages.sh@84 -- # : 0
00:02:59.629 21:18:22 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:59.629 21:18:22 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:02:59.629 21:18:22 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:02:59.629 21:18:22 -- setup/hugepages.sh@153 -- # setup output
00:02:59.629 21:18:22 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:59.629 21:18:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:02.925 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:02.925 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
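Before setup.sh ran, get_test_nr_hugepages_per_node (hugepages.sh@58-84 above) turned the 2097152 kB request into nr_hugepages=1024 and spread it evenly across the two NUMA nodes, 512 pages each. A sketch of that split under the same variable names (simplified: the real helper also honors an explicit user_nodes list):

_nr_hugepages=1024                        # 2097152 kB / 2048 kB per hugepage
_no_nodes=2
nodes_test=()
per_node=$((_nr_hugepages / _no_nodes))   # 512
while ((_no_nodes > 0)); do
    nodes_test[_no_nodes - 1]=$per_node   # fills node1 first, then node0
    ((_no_nodes--))
done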
00:03:02.925 21:18:25 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:02.925 21:18:25 -- setup/hugepages.sh@89 -- # local node
00:03:02.925 21:18:25 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:02.925 21:18:25 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:02.925 21:18:25 -- setup/hugepages.sh@92 -- # local surp
00:03:02.925 21:18:25 -- setup/hugepages.sh@93 -- # local resv
00:03:02.925 21:18:25 -- setup/hugepages.sh@94 -- # local anon
00:03:02.925 21:18:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:02.925 21:18:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:02.925 21:18:25 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:02.925 21:18:25 -- setup/common.sh@18 -- # local node=
00:03:02.925 21:18:25 -- setup/common.sh@19 -- # local var val
00:03:02.925 21:18:25 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.925 21:18:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.925 21:18:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.925 21:18:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.925 21:18:25 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.925 21:18:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.925 21:18:25 -- setup/common.sh@31 -- # IFS=': '
00:03:02.925 21:18:25 -- setup/common.sh@31 -- # read -r var val _
00:03:02.925 21:18:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39956928 kB' 'MemAvailable: 44590208 kB' 'Buffers: 2696 kB' 'Cached: 14090688 kB' 'SwapCached: 0 kB' 'Active: 11027124 kB' 'Inactive: 3664300 kB' 'Active(anon): 9915968 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 601212 kB' 'Mapped: 214140 kB' 'Shmem: 9317928 kB' 'KReclaimable: 505532 kB' 'Slab: 1158868 kB' 'SReclaimable: 505532 kB' 'SUnreclaim: 653336 kB' 'KernelStack: 22128 kB' 'PageTables: 9440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11311652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217260 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:03:02.925 [... setup/common.sh@31-32 trace elided: every key of this snapshot from MemTotal through HardwareCorrupted is compared against \A\n\o\n\H\u\g\e\P\a\g\e\s and skipped with "continue" ...]
00:03:02.926 21:18:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:02.926 21:18:25 -- setup/common.sh@33 -- # echo 0
00:03:02.926 21:18:25 -- setup/common.sh@33 -- # return 0
00:03:02.926 21:18:25 -- setup/hugepages.sh@97 -- # anon=0
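Each snapshot printed by setup/common.sh@16 above is internally consistent: HugePages_Total (1024) times Hugepagesize (2048 kB) equals the Hugetlb figure of 2097152 kB, exactly the size handed to get_test_nr_hugepages. A quick consistency check one could run on a live box (a plain awk one-liner, not part of the SPDK scripts):

awk '/^HugePages_Total:/ { total = $2 }
     /^Hugepagesize:/    { size = $2 }
     /^Hugetlb:/         { hugetlb = $2 }
     END { printf "%d pages x %d kB = %d kB (reported Hugetlb: %d kB)\n",
                  total, size, total * size, hugetlb }' /proc/meminfo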
00:03:02.926 21:18:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:02.926 21:18:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:02.926 21:18:25 -- setup/common.sh@18 -- # local node=
00:03:02.926 21:18:25 -- setup/common.sh@19 -- # local var val
00:03:02.926 21:18:25 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.926 21:18:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.926 21:18:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.926 21:18:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.926 21:18:25 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.926 21:18:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.926 21:18:25 -- setup/common.sh@31 -- # IFS=': '
00:03:02.926 21:18:25 -- setup/common.sh@31 -- # read -r var val _
00:03:02.926 21:18:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39957196 kB' 'MemAvailable: 44590476 kB' 'Buffers: 2696 kB' 'Cached: 14090688 kB' 'SwapCached: 0 kB' 'Active: 11027080 kB' 'Inactive: 3664300 kB' 'Active(anon): 9915924 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 601156 kB' 'Mapped: 214136 kB' 'Shmem: 9317928 kB' 'KReclaimable: 505532 kB' 'Slab: 1158860 kB' 'SReclaimable: 505532 kB' 'SUnreclaim: 653328 kB' 'KernelStack: 22096 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11311664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217244 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:03:02.926 [... setup/common.sh@31-32 trace elided: keys MemTotal through HugePages_Rsvd compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, each skipped with "continue" ...]
00:03:02.927 21:18:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:02.927 21:18:25 -- setup/common.sh@33 -- # echo 0
00:03:02.927 21:18:25 -- setup/common.sh@33 -- # return 0
00:03:02.927 21:18:25 -- setup/hugepages.sh@99 -- # surp=0
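Every get_meminfo call above also evaluates "[[ -e /sys/devices/system/node/node/meminfo ]]": node is empty for these system-wide reads, so the test fails and mem_f stays /proc/meminfo. With a node id the helper reads the per-node sysfs file instead, and the mem=("${mem[@]#Node +([0-9]) }") expansion strips the "Node N" prefix those lines carry. A sketch of just the source selection (helper name assumed for illustration):

meminfo_source() {
    # Pick the per-node meminfo when a NUMA node id is given, else the global file.
    local node=$1 mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}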
00:03:02.927 21:18:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:02.928 21:18:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:02.928 21:18:25 -- setup/common.sh@18 -- # local node=
00:03:02.928 21:18:25 -- setup/common.sh@19 -- # local var val
00:03:02.928 21:18:25 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.928 21:18:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.928 21:18:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.928 21:18:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.928 21:18:25 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.928 21:18:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.928 21:18:25 -- setup/common.sh@31 -- # IFS=': '
00:03:02.928 21:18:25 -- setup/common.sh@31 -- # read -r var val _
00:03:02.928 21:18:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39958364 kB' 'MemAvailable: 44591644 kB' 'Buffers: 2696 kB' 'Cached: 14090700 kB' 'SwapCached: 0 kB' 'Active: 11026792 kB' 'Inactive: 3664300 kB' 'Active(anon): 9915636 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 600844 kB' 'Mapped: 214136 kB' 'Shmem: 9317940 kB' 'KReclaimable: 505532 kB' 'Slab: 1158920 kB' 'SReclaimable: 505532 kB' 'SUnreclaim: 653388 kB' 'KernelStack: 22112 kB' 'PageTables: 9380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11311676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217244 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:03:02.928 [... setup/common.sh@31-32 trace elided: keys MemTotal through HugePages_Free compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, each skipped with "continue" ...]
00:03:02.929 21:18:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:02.929 21:18:25 -- setup/common.sh@33 -- # echo 0
00:03:02.929 21:18:25 -- setup/common.sh@33 -- # return 0
00:03:02.929 21:18:25 -- setup/hugepages.sh@100 -- # resv=0
00:03:02.929 21:18:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:02.929 nr_hugepages=1024
00:03:02.929 21:18:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:02.929 resv_hugepages=0
00:03:02.929 21:18:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:02.929 surplus_hugepages=0
00:03:02.929 21:18:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:02.929 anon_hugepages=0
00:03:02.929 21:18:25 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:02.929 21:18:25 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
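With anon=0, surp=0 and resv=0 gathered, hugepages.sh@102-109 above print the counters and pass only if the configured total is fully accounted for. Distilled to its arithmetic (a sketch, not the script itself; values from this run in the comments):

nr_hugepages=1024
anon=0 surp=0 resv=0
# 1024 is the expected total expanded into the xtrace comparisons above.
(( 1024 == nr_hugepages + surp + resv )) || echo "verify failed: surplus/reserved pages unaccounted"
(( 1024 == nr_hugepages )) && echo "even 2G allocation intact"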
00:03:02.929 21:18:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:02.929 21:18:25 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:02.929 21:18:25 -- setup/common.sh@18 -- # local node=
00:03:02.929 21:18:25 -- setup/common.sh@19 -- # local var val
00:03:02.929 21:18:25 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.929 21:18:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.929 21:18:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.929 21:18:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.929 21:18:25 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.929 21:18:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.929 21:18:25 -- setup/common.sh@31 -- # IFS=': '
00:03:02.929 21:18:25 -- setup/common.sh@31 -- # read -r var val _
00:03:02.929 21:18:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39958684 kB' 'MemAvailable: 44591964 kB' 'Buffers: 2696 kB' 'Cached: 14090716 kB' 'SwapCached: 0 kB' 'Active: 11026812 kB' 'Inactive: 3664300 kB' 'Active(anon): 9915656 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 600840 kB' 'Mapped: 214136 kB' 'Shmem: 9317956 kB' 'KReclaimable: 505532 kB' 'Slab: 1158920 kB' 'SReclaimable: 505532 kB' 'SUnreclaim: 653388 kB' 'KernelStack: 22112 kB' 'PageTables: 9380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11311692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217244 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:03:02.930 [... setup/common.sh@31-32 trace elided: keys MemTotal through Shmem compared against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l so far, each skipped with "continue"; the scan continues below ...]
00:03:02.930 21:18:25
-- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # 
[[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.930 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.930 21:18:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:02.930 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.931 21:18:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.931 21:18:25 -- setup/common.sh@33 -- # echo 1024 00:03:02.931 21:18:25 -- setup/common.sh@33 -- # return 0 00:03:02.931 21:18:25 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:02.931 21:18:25 -- setup/hugepages.sh@112 -- # get_nodes 00:03:02.931 21:18:25 -- setup/hugepages.sh@27 -- # local node 00:03:02.931 21:18:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.931 21:18:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:02.931 21:18:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.931 21:18:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:02.931 21:18:25 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:02.931 21:18:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:02.931 21:18:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.931 21:18:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:02.931 21:18:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:02.931 21:18:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.931 21:18:25 -- setup/common.sh@18 -- # local node=0 00:03:02.931 21:18:25 -- setup/common.sh@19 -- # local var val 00:03:02.931 21:18:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.931 21:18:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.931 21:18:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:02.931 21:18:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:02.931 21:18:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.931 21:18:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.931 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.931 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.931 21:18:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20253576 kB' 'MemUsed: 12385564 kB' 'SwapCached: 0 kB' 'Active: 6746744 kB' 'Inactive: 3290148 kB' 'Active(anon): 6212612 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3290148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9644700 kB' 'Mapped: 129060 kB' 'AnonPages: 395316 kB' 'Shmem: 5820420 kB' 'KernelStack: 11960 kB' 'PageTables: 4812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334468 kB' 'Slab: 651324 kB' 'SReclaimable: 334468 kB' 'SUnreclaim: 316856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:02.931 21:18:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.931 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.931 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.931 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.931 21:18:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.931 21:18:25 -- setup/common.sh@32 -- # continue 00:03:02.931 21:18:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.931 21:18:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.931 21:18:25 -- 
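For orientation: the get_meminfo calls traced above and below reduce to scanning a meminfo file (the global /proc/meminfo, or one node's view under /sys/devices/system/node) for a single key. A minimal stand-alone sketch written for this log, not the actual setup/common.sh source; the helper name meminfo_value is ours:

  meminfo_value() {
      # Print the value of one meminfo field; optional $2 selects a NUMA node's view.
      local get=$1 node=$2 line var val _
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS= read -r line; do
          line=${line#"Node $node "}             # per-node files prefix lines with "Node <N> "
          IFS=': ' read -r var val _ <<< "$line" # split "Key:  value kB" into key and value
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "$mem_f"
      return 1                                   # field not present
  }

Called as meminfo_value HugePages_Total or meminfo_value HugePages_Surp 0, it prints the same values (1024 and 0) that the trace extracts field by field.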
00:03:02.931 21:18:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:02.931 21:18:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:02.931 21:18:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:02.931 21:18:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:02.931 21:18:25 -- setup/common.sh@18 -- # local node=0
00:03:02.931 21:18:25 -- setup/common.sh@19 -- # local var val
00:03:02.931 21:18:25 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.931 21:18:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.931 21:18:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:02.931 21:18:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:02.931 21:18:25 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.931 21:18:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.931 21:18:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20253576 kB' 'MemUsed: 12385564 kB' 'SwapCached: 0 kB' 'Active: 6746744 kB' 'Inactive: 3290148 kB' 'Active(anon): 6212612 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3290148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9644700 kB' 'Mapped: 129060 kB' 'AnonPages: 395316 kB' 'Shmem: 5820420 kB' 'KernelStack: 11960 kB' 'PageTables: 4812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334468 kB' 'Slab: 651324 kB' 'SReclaimable: 334468 kB' 'SUnreclaim: 316856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:02.931 [xtrace elided: setup/common.sh@31-32 continues past every node0 meminfo field that is not HugePages_Surp]
00:03:02.932 21:18:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:02.932 21:18:25 -- setup/common.sh@33 -- # echo 0
00:03:02.932 21:18:25 -- setup/common.sh@33 -- # return 0
00:03:02.932 21:18:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
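The bookkeeping around these per-node reads (setup/hugepages.sh@110 and @115-117 in the trace) amounts to one invariant plus a per-node adjustment. An illustrative recheck under the values visible above, reusing the hypothetical meminfo_value helper; nr_hugepages and nodes_test are stand-ins for the script's own state, not its source:

  nr_hugepages=1024
  nodes_test=(512 512)                      # expected per-node split from the test
  total=$(meminfo_value HugePages_Total)    # 1024 in the dump above
  surp=$(meminfo_value HugePages_Surp)      # 0
  resv=$(meminfo_value HugePages_Rsvd)      # 0
  # Globally allocated pages must equal requested + surplus + reserved.
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
  for node in "${!nodes_test[@]}"; do
      # Fold reserved pages and each node's surplus into its expected count,
      # mirroring what @116-117 do before the final comparison.
      (( nodes_test[node] += resv + $(meminfo_value HugePages_Surp "$node") ))
  done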
00:03:02.932 21:18:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:02.932 21:18:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:02.932 21:18:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:02.932 21:18:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:02.932 21:18:25 -- setup/common.sh@18 -- # local node=1
00:03:02.932 21:18:25 -- setup/common.sh@19 -- # local var val
00:03:02.932 21:18:25 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.932 21:18:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.932 21:18:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:02.932 21:18:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:02.932 21:18:25 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.932 21:18:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.932 21:18:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656076 kB' 'MemFree: 19705832 kB' 'MemUsed: 7950244 kB' 'SwapCached: 0 kB' 'Active: 4280384 kB' 'Inactive: 374152 kB' 'Active(anon): 3703360 kB' 'Inactive(anon): 0 kB' 'Active(file): 577024 kB' 'Inactive(file): 374152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4448724 kB' 'Mapped: 85076 kB' 'AnonPages: 205844 kB' 'Shmem: 3497548 kB' 'KernelStack: 10152 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 171064 kB' 'Slab: 507596 kB' 'SReclaimable: 171064 kB' 'SUnreclaim: 336532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:02.932 [xtrace elided: setup/common.sh@31-32 continues past every node1 meminfo field that is not HugePages_Surp]
00:03:02.933 21:18:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:02.933 21:18:25 -- setup/common.sh@33 -- # echo 0
00:03:02.933 21:18:25 -- setup/common.sh@33 -- # return 0
00:03:02.933 21:18:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:02.933 21:18:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:02.933 21:18:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:02.933 21:18:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:02.933 21:18:25 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:02.933 node0=512 expecting 512
00:03:02.933 21:18:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:02.933 21:18:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:02.933 21:18:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:02.933 21:18:25 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:02.933 node1=512 expecting 512
00:03:02.933 21:18:25 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:02.933
00:03:02.933 real	0m3.123s
00:03:02.933 user	0m1.109s
00:03:02.933 sys	0m2.010s
00:03:02.933 21:18:25 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:02.933 21:18:25 -- common/autotest_common.sh@10 -- # set +x
00:03:02.933 ************************************
00:03:02.933 END TEST even_2G_alloc
00:03:02.933 ************************************
00:03:02.933 21:18:25 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:02.933 21:18:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:02.933 21:18:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:02.933 21:18:25 -- common/autotest_common.sh@10 -- # set +x
00:03:02.933 ************************************
00:03:02.933 START TEST odd_alloc
00:03:02.933 ************************************
00:03:02.933 21:18:25 -- common/autotest_common.sh@1111 -- # odd_alloc
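The sizing trace that follows requests 2098176 kB (HUGEMEM=2049 MB) of 2048 kB pages, an odd count of 1025, and distributes it across the two NUMA nodes so that node1 gets 512 and node0 absorbs the remainder (513). A simplified illustration of that split, not the setup/hugepages.sh source (split_hugepages is a name invented here):

  # Split nr_hugepages across nodes, lower-numbered nodes absorbing the remainder.
  split_hugepages() {
      local nr=$1 nodes=$2 per rem i
      per=$(( nr / nodes ))            # 1025 / 2 = 512
      rem=$(( nr % nodes ))            # 1025 % 2 = 1
      for (( i = nodes - 1; i >= 0; i-- )); do
          echo "node$i=$(( per + (i < rem ? 1 : 0) ))"
      done
  }
  split_hugepages 1025 2               # prints node1=512, then node0=513

This matches the assignment order visible below, where nodes_test is filled from the highest node index down.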
00:03:02.933 21:18:25 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:02.933 21:18:25 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:02.933 21:18:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:02.933 21:18:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:02.933 21:18:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:02.933 21:18:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:02.933 21:18:25 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:02.933 21:18:25 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:02.933 21:18:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:02.933 21:18:25 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:02.933 21:18:25 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:02.933 21:18:25 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:02.933 21:18:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:02.933 21:18:25 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:02.933 21:18:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:02.933 21:18:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:02.933 21:18:25 -- setup/hugepages.sh@83 -- # : 513
00:03:02.933 21:18:25 -- setup/hugepages.sh@84 -- # : 1
00:03:02.933 21:18:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:02.933 21:18:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:02.933 21:18:25 -- setup/hugepages.sh@83 -- # : 0
00:03:02.933 21:18:25 -- setup/hugepages.sh@84 -- # : 0
00:03:02.933 21:18:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:02.933 21:18:25 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:02.933 21:18:25 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:02.933 21:18:25 -- setup/hugepages.sh@160 -- # setup output
00:03:02.933 21:18:25 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:02.933 21:18:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:05.472 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:05.472 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
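For context, HUGEMEM and HUGE_EVEN_ALLOC exported above are the knobs that drive the hugepage allocation performed by scripts/setup.sh. A typical manual invocation to reproduce it, assuming the SPDK repository root as the working directory, would be:

  # Ask setup.sh for 2049 MB of 2 MB hugepages, spread evenly across NUMA nodes
  # (the same variables the trace above exports before calling the script).
  sudo HUGEMEM=2049 HUGE_EVEN_ALLOC=yes ./scripts/setup.sh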
00:03:05.736 21:18:28 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:05.736 21:18:28 -- setup/hugepages.sh@89 -- # local node
00:03:05.736 21:18:28 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:05.736 21:18:28 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:05.736 21:18:28 -- setup/hugepages.sh@92 -- # local surp
00:03:05.736 21:18:28 -- setup/hugepages.sh@93 -- # local resv
00:03:05.736 21:18:28 -- setup/hugepages.sh@94 -- # local anon
00:03:05.736 21:18:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:05.736 21:18:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:05.736 21:18:28 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:05.736 21:18:28 -- setup/common.sh@18 -- # local node=
00:03:05.736 21:18:28 -- setup/common.sh@19 -- # local var val
00:03:05.736 21:18:28 -- setup/common.sh@20 -- # local mem_f mem
00:03:05.736 21:18:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.736 21:18:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.736 21:18:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.736 21:18:28 -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.736 21:18:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.736 21:18:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39965372 kB' 'MemAvailable: 44598652 kB' 'Buffers: 2696 kB' 'Cached: 14090808 kB' 'SwapCached: 0 kB' 'Active: 11033216 kB' 'Inactive: 3664300 kB' 'Active(anon): 9922060 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607804 kB' 'Mapped: 214640 kB' 'Shmem: 9318048 kB' 'KReclaimable: 505532 kB' 'Slab: 1158548 kB' 'SReclaimable: 505532 kB' 'SUnreclaim: 653016 kB' 'KernelStack: 22128 kB' 'PageTables: 9436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11318240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217168 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
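The gate at setup/hugepages.sh@96 above checks the active transparent-hugepage mode (the bracketed entry in the kernel's enabled file) before counting AnonHugePages. Schematically, again using the hypothetical meminfo_value helper from earlier, not the script's source:

  # Anonymous-hugepage usage is only meaningful when THP is not disabled
  # system-wide; "[never]" marks the active mode in the sysfs file.
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      anon=$(meminfo_value AnonHugePages)
  else
      anon=0
  fi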
00:03:05.736 [xtrace elided: setup/common.sh@31-32 continues past every meminfo field above that is not AnonHugePages]
00:03:05.737 21:18:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:05.737 21:18:28 -- setup/common.sh@33 -- # echo 0
00:03:05.737 21:18:28 -- setup/common.sh@33 -- # return 0
00:03:05.737 21:18:28 -- setup/hugepages.sh@97 -- # anon=0
00:03:05.738 21:18:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:05.738 21:18:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:05.738 21:18:28 -- setup/common.sh@18 -- # local node=
00:03:05.738 21:18:28 -- setup/common.sh@19 -- # local var val
00:03:05.738 21:18:28 -- setup/common.sh@20 -- # local mem_f mem
00:03:05.738 21:18:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.738 21:18:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.738 21:18:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.738 21:18:28 -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.738 21:18:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.738 21:18:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39965884 kB' 'MemAvailable: 44599164 kB' 'Buffers: 2696 kB' 'Cached: 14090812 kB' 'SwapCached: 0 kB' 'Active: 11028716 kB' 'Inactive: 3664300 kB' 'Active(anon): 9917560 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 602912 kB'
'Mapped: 214964 kB' 'Shmem: 9318052 kB' 'KReclaimable: 505532 kB' 'Slab: 1158528 kB' 'SReclaimable: 505532 kB' 'SUnreclaim: 652996 kB' 'KernelStack: 22112 kB' 'PageTables: 9404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11313620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217132 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:03:05.738 [xtrace elided: setup/common.sh@31-32 scans the meminfo fields above for HugePages_Surp; the per-field trace resumes below] 00:03:05.739
21:18:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 
21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.739 21:18:28 -- setup/common.sh@33 -- # echo 0 00:03:05.739 21:18:28 -- setup/common.sh@33 -- # return 0 00:03:05.739 21:18:28 -- setup/hugepages.sh@99 -- # surp=0 00:03:05.739 21:18:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:05.739 21:18:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:05.739 21:18:28 -- setup/common.sh@18 -- # local node= 00:03:05.739 21:18:28 -- setup/common.sh@19 -- # local var val 00:03:05.739 21:18:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.739 21:18:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.739 21:18:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.739 21:18:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.739 21:18:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.739 21:18:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39966208 kB' 'MemAvailable: 44599488 kB' 'Buffers: 2696 kB' 'Cached: 14090824 kB' 'SwapCached: 0 kB' 'Active: 11032932 kB' 'Inactive: 3664300 kB' 'Active(anon): 9921776 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607552 kB' 'Mapped: 214632 kB' 'Shmem: 9318064 kB' 'KReclaimable: 505532 kB' 'Slab: 1158528 kB' 'SReclaimable: 505532 kB' 'SUnreclaim: 652996 kB' 'KernelStack: 22112 kB' 'PageTables: 9380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11318268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217120 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB' 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.739 21:18:28 -- 
setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.739 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.739 21:18:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- 
setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.740 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.740 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.741 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.741 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.741 21:18:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.741 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.741 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.741 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.741 21:18:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.741 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.741 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.741 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.741 21:18:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.741 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.741 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.741 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.741 21:18:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.741 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.741 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.741 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.741 21:18:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.741 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.741 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.741 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.741 21:18:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.741 21:18:28 -- setup/common.sh@33 -- # echo 0 00:03:05.741 21:18:28 -- setup/common.sh@33 -- # return 0 00:03:05.741 21:18:28 -- setup/hugepages.sh@100 -- # resv=0 
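The three lookups above (AnonHugePages, HugePages_Surp, HugePages_Rsvd) all follow the same pattern: get_meminfo snapshots a meminfo-style file, then walks it key by key, skipping entries until the requested key matches, and echoes that key's value. A minimal bash sketch of that lookup, reconstructed from the trace rather than taken from the actual setup/common.sh source (the function name, argument handling, and parsing details here are illustrative assumptions):

    get_meminfo_sketch() {
        local get=$1 node=$2            # key to look up, optional NUMA node
        local mem_f=/proc/meminfo line val
        # Per-node lookups switch to the node's own meminfo, as the trace does later.
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node [0-9]* }            # per-node files prefix each line with "Node N "
            [[ $line == "$get:"* ]] || continue  # skip keys until the requested one matches
            read -r _ val _ <<< "$line"
            echo "$val"                          # emit just the value, e.g. "0" or "1025"
            return 0
        done < "$mem_f"
        return 1
    }

On this box, get_meminfo_sketch HugePages_Surp would print 0 and get_meminfo_sketch HugePages_Total would print 1025, matching the returns in the trace.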
00:03:05.741 21:18:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:05.741 nr_hugepages=1025
00:03:05.741 21:18:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:05.741 resv_hugepages=0
00:03:05.741 21:18:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:05.741 surplus_hugepages=0
00:03:05.741 21:18:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:05.741 anon_hugepages=0
00:03:05.741 21:18:28 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:05.741 21:18:28 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:05.741 21:18:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:05.741 21:18:28 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:05.741 21:18:28 -- setup/common.sh@18 -- # local node=
00:03:05.741 21:18:28 -- setup/common.sh@19 -- # local var val
00:03:05.741 21:18:28 -- setup/common.sh@20 -- # local mem_f mem
00:03:05.741 21:18:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.741 21:18:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.741 21:18:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.741 21:18:28 -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.741 21:18:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.741 21:18:28 -- setup/common.sh@31 -- # IFS=': '
00:03:05.741 21:18:28 -- setup/common.sh@31 -- # read -r var val _
00:03:05.741 21:18:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39965704 kB' 'MemAvailable: 44598984 kB' 'Buffers: 2696 kB' 'Cached: 14090836 kB' 'SwapCached: 0 kB' 'Active: 11027936 kB' 'Inactive: 3664300 kB' 'Active(anon): 9916780 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 602072 kB' 'Mapped: 214520 kB' 'Shmem: 9318076 kB' 'KReclaimable: 505532 kB' 'Slab: 1158528 kB' 'SReclaimable: 505532 kB' 'SUnreclaim: 652996 kB' 'KernelStack: 22112 kB' 'PageTables: 9404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11312160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217132 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
[... setup/common.sh@31-32 scan loop: every key from MemTotal through Unaccepted is read and skipped ...]
00:03:05.742 21:18:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:05.742 21:18:28 -- setup/common.sh@33 -- # echo 1025
00:03:05.743 21:18:28 -- setup/common.sh@33 -- # return 0
00:03:05.743 21:18:28 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:05.743 21:18:28 -- setup/hugepages.sh@112 -- # get_nodes
00:03:05.743 21:18:28 -- setup/hugepages.sh@27 -- # local node
00:03:05.743 21:18:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:05.743 21:18:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:05.743 21:18:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:05.743 21:18:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:05.743 21:18:28 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:05.743 21:18:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:05.743 21:18:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:05.743 21:18:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
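At this point the test has confirmed the accounting identity it cares about: HugePages_Total (1025) equals nr_hugepages plus surplus plus reserved (1025 + 0 + 0), and get_nodes has recorded an uneven 512/513 expectation for the two NUMA nodes, since an odd total cannot split evenly. A small cross-check sketch using the illustrative helper from the earlier sketch (the glob and suffix expansion mirror the trace; nothing here is the script's own code):

    total=0
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        [[ -e $node_dir/meminfo ]] || continue
        n=${node_dir##*node}                       # same expansion as nodes_sys[${node##*node}]
        per_node=$(get_meminfo_sketch HugePages_Total "$n")
        echo "node$n: $per_node hugepages"         # expect 512 and 513 in this run
        (( total += per_node ))
    done
    (( total == $(get_meminfo_sketch HugePages_Total) )) && echo 'per-node counts sum to the global total'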
00:03:05.743 21:18:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:05.743 21:18:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:05.743 21:18:28 -- setup/common.sh@18 -- # local node=0
00:03:05.743 21:18:28 -- setup/common.sh@19 -- # local var val
00:03:05.743 21:18:28 -- setup/common.sh@20 -- # local mem_f mem
00:03:05.743 21:18:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.743 21:18:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:05.743 21:18:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:05.743 21:18:28 -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.743 21:18:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.743 21:18:28 -- setup/common.sh@31 -- # IFS=': '
00:03:05.743 21:18:28 -- setup/common.sh@31 -- # read -r var val _
00:03:05.743 21:18:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20269208 kB' 'MemUsed: 12369932 kB' 'SwapCached: 0 kB' 'Active: 6747088 kB' 'Inactive: 3290148 kB' 'Active(anon): 6212956 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3290148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9644768 kB' 'Mapped: 129028 kB' 'AnonPages: 395632 kB' 'Shmem: 5820488 kB' 'KernelStack: 11960 kB' 'PageTables: 4816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334468 kB' 'Slab: 651040 kB' 'SReclaimable: 334468 kB' 'SUnreclaim: 316572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32 scan loop: node0 keys from MemTotal through HugePages_Free are read and skipped ...]
00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:05.744 21:18:28 -- setup/common.sh@33 -- # echo 0
00:03:05.744 21:18:28 -- setup/common.sh@33 -- # return 0
00:03:05.744 21:18:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:05.744 21:18:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:05.744 21:18:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:05.744 21:18:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:05.744 21:18:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:05.744 21:18:28 -- setup/common.sh@18 -- # local node=1
00:03:05.744 21:18:28 -- setup/common.sh@19 -- # local var val
00:03:05.744 21:18:28 -- setup/common.sh@20 -- # local mem_f mem
00:03:05.744 21:18:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.744 21:18:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:05.744 21:18:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:05.744 21:18:28 -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.744 21:18:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': '
00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _
00:03:05.744 21:18:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656076 kB' 'MemFree: 19705568 kB' 'MemUsed: 7950508 kB' 'SwapCached: 0 kB' 'Active: 4280964 kB' 'Inactive: 374152 kB' 'Active(anon): 3703940 kB' 'Inactive(anon): 0 kB' 'Active(file): 577024 kB' 'Inactive(file): 374152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4448764 kB' 'Mapped: 85100 kB' 'AnonPages: 206624 kB' 'Shmem: 3497588 kB' 'KernelStack: 10200 kB' 'PageTables: 4688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 171064 kB' 'Slab: 507488 kB' 'SReclaimable: 171064 kB' 'SUnreclaim: 336424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
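The per-node lookup just above differs from the global ones only in its input: common.sh@24 repoints mem_f at /sys/devices/system/node/node1/meminfo, whose lines carry a "Node 1 " prefix, and common.sh@29 strips that prefix with an extglob pattern. That one expansion, shown standalone (the node1 path is taken from this run; on a single-node machine substitute node0):

    shopt -s extglob                                  # +([0-9]) below is an extglob pattern
    mapfile -t mem < /sys/devices/system/node/node1/meminfo
    mem=("${mem[@]#Node +([0-9]) }")                  # 'Node 1 MemTotal: ...' -> 'MemTotal: ...'
    printf '%s\n' "${mem[@]:0:3}"                     # first few keys, now unprefixed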
21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 
21:18:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.744 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.744 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # continue 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.745 21:18:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.745 21:18:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.745 21:18:28 -- setup/common.sh@33 -- # echo 0 00:03:05.745 21:18:28 -- setup/common.sh@33 -- # return 0 00:03:05.745 21:18:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.745 21:18:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.745 21:18:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.745 21:18:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.745 21:18:28 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:05.745 node0=512 expecting 513 00:03:05.745 21:18:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.745 21:18:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.745 21:18:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 
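For readers following the trace: each get_meminfo call above boils down to picking a meminfo file (the per-node sysfs copy when a node argument is given, /proc/meminfo otherwise), stripping the "Node <n> " prefix that the sysfs variant carries, and scanning key/value pairs until the requested field matches. A minimal standalone sketch of that flow, under the assumption that the traced helper (setup/common.sh in the SPDK test tree) follows the steps the xtrace shows; this is an illustration, not the script itself:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own copy, whose lines are
        # prefixed with "Node <n> "; otherwise fall back to /proc/meminfo.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # no-op for /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp 1   # -> 0 on the node traced above

The scan is linear, which is why the xtrace walks every meminfo key in order until it reaches the one it wants.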
00:03:06.005 21:18:28 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:06.005 21:18:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:06.005 21:18:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:06.005 21:18:28 -- common/autotest_common.sh@10 -- # set +x
00:03:06.005 ************************************
00:03:06.005 START TEST custom_alloc
00:03:06.005 ************************************
00:03:06.005 21:18:28 -- common/autotest_common.sh@1111 -- # custom_alloc
00:03:06.005 21:18:28 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:06.005 21:18:28 -- setup/hugepages.sh@169 -- # local node
00:03:06.005 21:18:28 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:06.005 21:18:28 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:06.005 21:18:28 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:06.005 21:18:28 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:06.005 21:18:28 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:06.005 21:18:28 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:06.005 21:18:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:06.005 21:18:28 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:06.005 21:18:28 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:06.005 21:18:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:06.005 21:18:28 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:06.005 21:18:28 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:06.005 21:18:28 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:06.005 21:18:28 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:06.005 21:18:28 -- setup/hugepages.sh@83 -- # : 256
00:03:06.005 21:18:28 -- setup/hugepages.sh@84 -- # : 1
00:03:06.005 21:18:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:06.005 21:18:28 -- setup/hugepages.sh@83 -- # : 0
00:03:06.005 21:18:28 -- setup/hugepages.sh@84 -- # : 0
00:03:06.005 21:18:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:06.005 21:18:28 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:06.005 21:18:28 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:06.005 21:18:28 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:06.005 21:18:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:06.005 21:18:28 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:06.005 21:18:28 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:06.005 21:18:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:06.005 21:18:28 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:06.005 21:18:28 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:06.005 21:18:28 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:06.005 21:18:28 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:06.005 21:18:28 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:06.005 21:18:28 -- setup/hugepages.sh@78 -- # return 0
00:03:06.005 21:18:28 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:06.005 21:18:28 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:06.005 21:18:28 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:06.005 21:18:28 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:06.005 21:18:28 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:06.005 21:18:28 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:06.005 21:18:28 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:06.005 21:18:28 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:06.005 21:18:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:06.005 21:18:28 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:06.005 21:18:28 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:06.005 21:18:28 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:06.005 21:18:28 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:06.005 21:18:28 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:06.005 21:18:28 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:06.005 21:18:28 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:06.005 21:18:28 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:06.005 21:18:28 -- setup/hugepages.sh@78 -- # return 0
00:03:06.005 21:18:28 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:06.005 21:18:28 -- setup/hugepages.sh@187 -- # setup output
00:03:06.005 21:18:28 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:06.005 21:18:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:09.326 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:09.326 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
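The arithmetic in the trace above is simple: each requested size in kB is divided by the default hugepage size (Hugepagesize: 2048 kB on this machine), giving 512 pages for the 1 GiB request on node 0 and 1024 pages for the 2 GiB request on node 1, and the per-node plan is handed to setup.sh as a comma-joined HUGENODE string. A sketch of that computation; the helper name size_to_pages is illustrative, not from the script, which also handles user-supplied node lists (the ": 256" / ": 1" no-ops seen in the trace):

    #!/usr/bin/env bash
    default_hugepages=2048                 # kB, matches 'Hugepagesize: 2048 kB'

    size_to_pages() {                      # kB of memory -> hugepage count
        local size=$1
        (( size >= default_hugepages )) || return 1
        echo $(( size / default_hugepages ))
    }

    declare -a nodes_hp
    nodes_hp[0]=$(size_to_pages 1048576)   # 512
    nodes_hp[1]=$(size_to_pages 2097152)   # 1024

    # Join the per-node assignments into the HUGENODE string setup.sh consumes.
    parts=()
    for node in "${!nodes_hp[@]}"; do
        parts+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    HUGENODE=$(IFS=,; echo "${parts[*]}")
    echo "$HUGENODE"                       # nodes_hp[0]=512,nodes_hp[1]=1024

The total, 512 + 1024 = 1536 pages, is the figure the verification phase below checks against.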
00:03:09.326 21:18:31 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:09.326 21:18:31 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:09.326 21:18:31 -- setup/hugepages.sh@89 -- # local node
00:03:09.326 21:18:31 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:09.326 21:18:31 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:09.326 21:18:31 -- setup/hugepages.sh@92 -- # local surp
00:03:09.326 21:18:31 -- setup/hugepages.sh@93 -- # local resv
00:03:09.326 21:18:31 -- setup/hugepages.sh@94 -- # local anon
00:03:09.326 21:18:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:09.326 21:18:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:09.326 21:18:31 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:09.326 21:18:31 -- setup/common.sh@18 -- # local node=
00:03:09.326 21:18:31 -- setup/common.sh@19 -- # local var val
00:03:09.326 21:18:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:09.326 21:18:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.326 21:18:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.326 21:18:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.326 21:18:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.326 21:18:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.326 21:18:31 -- setup/common.sh@31 -- # IFS=': '
00:03:09.326 21:18:31 -- setup/common.sh@31 -- # read -r var val _
00:03:09.326 21:18:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 38943064 kB' 'MemAvailable: 43576280 kB' 'Buffers: 2696 kB' 'Cached: 14090944 kB' 'SwapCached: 0 kB' 'Active: 11029180 kB' 'Inactive: 3664300 kB' 'Active(anon): 9918024 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603180 kB' 'Mapped: 214720 kB' 'Shmem: 9318184 kB' 'KReclaimable: 505468 kB' 'Slab: 1158264 kB' 'SReclaimable: 505468 kB' 'SUnreclaim: 652796 kB' 'KernelStack: 22112 kB' 'PageTables: 9380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11314388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217148 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:03:09.326 21:18:31 -- setup/common.sh@31-32 -- # [per-key scan: MemTotal through HardwareCorrupted -- no match for AnonHugePages, continue]
00:03:09.327 21:18:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:09.327 21:18:31 -- setup/common.sh@33 -- # echo 0
00:03:09.327 21:18:31 -- setup/common.sh@33 -- # return 0
00:03:09.327 21:18:31 -- setup/hugepages.sh@97 -- # anon=0
00:03:09.327 21:18:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:09.327 21:18:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:09.327 21:18:31 -- setup/common.sh@18 -- # local node=
00:03:09.327 21:18:31 -- setup/common.sh@19 -- # local var val
00:03:09.327 21:18:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:09.327 21:18:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.327 21:18:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.327 21:18:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.327 21:18:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.327 21:18:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.327 21:18:31 -- setup/common.sh@31 -- # IFS=': '
00:03:09.327 21:18:31 -- setup/common.sh@31 -- # read -r var val _
00:03:09.328 21:18:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 38936724 kB' 'MemAvailable: 43569940 kB' 'Buffers: 2696 kB' 'Cached: 14090944 kB' 'SwapCached: 0 kB' 'Active: 11032900 kB' 'Inactive: 3664300 kB' 'Active(anon): 9921744 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607420 kB' 'Mapped: 214680 kB' 'Shmem: 9318184 kB' 'KReclaimable: 505468 kB' 'Slab: 1158312 kB' 'SReclaimable: 505468 kB' 'SUnreclaim: 652844 kB' 'KernelStack: 22112 kB' 'PageTables: 9396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11318768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217136 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:03:09.328 21:18:31 -- setup/common.sh@31-32 -- # [per-key scan: MemTotal through HugePages_Rsvd -- no match for HugePages_Surp, continue]
00:03:09.329 21:18:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.329 21:18:31 -- setup/common.sh@33 -- # echo 0
00:03:09.329 21:18:31 -- setup/common.sh@33 -- # return 0
00:03:09.329 21:18:31 -- setup/hugepages.sh@99 -- # surp=0
00:03:09.329 21:18:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:09.329 21:18:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:09.329 21:18:31 -- setup/common.sh@18 -- # local node=
00:03:09.329 21:18:31 -- setup/common.sh@19 -- # local var val
00:03:09.329 21:18:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:09.329 21:18:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.329 21:18:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.329 21:18:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.329 21:18:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.329 21:18:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.329 21:18:31 -- setup/common.sh@31 -- # IFS=': '
00:03:09.329 21:18:31 -- setup/common.sh@31 -- # read -r var val _
00:03:09.329 21:18:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 38937228 kB' 'MemAvailable: 43570444 kB' 'Buffers: 2696 kB' 'Cached: 14090944 kB' 'SwapCached: 0 kB' 'Active: 11033460 kB' 'Inactive: 3664300 kB' 'Active(anon): 9922304 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607484 kB' 'Mapped: 215072 kB' 'Shmem: 9318184 kB' 'KReclaimable: 505468 kB' 'Slab: 1158312 kB' 'SReclaimable: 505468 kB' 'SUnreclaim: 652844 kB' 'KernelStack: 22112 kB' 'PageTables: 9420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11318784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217136 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:03:09.329 21:18:31 -- setup/common.sh@31-32 -- # [per-key scan: MemTotal through HugePages_Free -- no match for HugePages_Rsvd, continue]
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.331 21:18:31 -- setup/common.sh@32 -- # continue 00:03:09.331 21:18:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.331 21:18:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.331 21:18:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.331 21:18:31 -- setup/common.sh@32 -- # continue 00:03:09.331 21:18:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.331 21:18:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.331 21:18:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.331 21:18:31 -- setup/common.sh@32 -- # continue 00:03:09.331 21:18:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.331 21:18:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.331 21:18:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.331 21:18:31 -- setup/common.sh@32 -- # continue 00:03:09.331 21:18:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.331 21:18:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.331 21:18:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.331 21:18:31 -- setup/common.sh@32 -- # continue 00:03:09.331 21:18:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.331 21:18:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.331 21:18:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.331 21:18:31 -- setup/common.sh@32 -- # continue 00:03:09.331 21:18:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.331 21:18:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.331 21:18:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.331 21:18:31 -- setup/common.sh@33 -- # echo 0 00:03:09.331 21:18:31 -- setup/common.sh@33 -- # return 0 00:03:09.331 21:18:31 -- setup/hugepages.sh@100 -- # resv=0 00:03:09.331 21:18:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:09.331 nr_hugepages=1536 00:03:09.331 21:18:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:09.331 resv_hugepages=0 00:03:09.331 21:18:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:09.331 surplus_hugepages=0 00:03:09.331 21:18:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:09.331 anon_hugepages=0 00:03:09.331 21:18:31 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:09.331 21:18:31 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:09.331 21:18:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:09.331 21:18:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:09.331 21:18:31 -- setup/common.sh@18 -- # local node= 00:03:09.331 21:18:31 -- setup/common.sh@19 -- # local var val 00:03:09.331 21:18:31 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.331 21:18:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.331 21:18:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.331 21:18:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.331 21:18:31 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.331 21:18:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.331 21:18:31 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.331 21:18:31 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.331 21:18:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 38944648 kB' 'MemAvailable: 43577864 kB' 'Buffers: 2696 kB' 'Cached: 14090972 kB' 'SwapCached: 0 
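The two arithmetic checks above are the suite's hugepage accounting identity: the kernel-reported total must equal the requested page count plus any surplus and reserved pages. A minimal standalone sketch of the same assertion, reading /proc/meminfo directly (variable names here are illustrative, not the script's own):

    #!/usr/bin/env bash
    # Sketch: assert HugePages_Total == requested + surplus + reserved.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    requested=$(cat /proc/sys/vm/nr_hugepages)
    if (( total == requested + surp + resv )); then
        echo "hugepage accounting consistent: ${total} pages"
    else
        echo "mismatch: total=${total} requested=${requested} surp=${surp} resv=${resv}" >&2
        exit 1
    fi

On this run surplus and reserved are both zero, so the identity reduces to 1536 == 1536.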
00:03:09.331 21:18:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:09.331 21:18:31 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:09.331 21:18:31 -- setup/common.sh@18 -- # local node=
00:03:09.331 21:18:31 -- setup/common.sh@19 -- # local var val
00:03:09.331 21:18:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:09.331 21:18:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.331 21:18:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.331 21:18:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.331 21:18:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.331 21:18:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.331 21:18:31 -- setup/common.sh@31 -- # IFS=': '
00:03:09.331 21:18:31 -- setup/common.sh@31 -- # read -r var val _
00:03:09.331 21:18:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 38944648 kB' 'MemAvailable: 43577864 kB' 'Buffers: 2696 kB' 'Cached: 14090972 kB' 'SwapCached: 0 kB' 'Active: 11030116 kB' 'Inactive: 3664300 kB' 'Active(anon): 9918960 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604076 kB' 'Mapped: 214568 kB' 'Shmem: 9318212 kB' 'KReclaimable: 505468 kB' 'Slab: 1158312 kB' 'SReclaimable: 505468 kB' 'SUnreclaim: 652844 kB' 'KernelStack: 22128 kB' 'PageTables: 9468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11315616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217148 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:03:09.331 21:18:31 -- setup/common.sh@32 -- # [xtrace condensed: each field compared against HugePages_Total and skipped via continue until the match]
00:03:09.334 21:18:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:09.334 21:18:31 -- setup/common.sh@33 -- # echo 1536
00:03:09.334 21:18:31 -- setup/common.sh@33 -- # return 0
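The lookup that just returned 1536 is get_meminfo's whole job: split each meminfo line on ': ' and echo the value of the first field whose name equals the requested key. A condensed sketch of that pattern without the per-field xtrace noise (the function name is illustrative):

    # Sketch of the get_meminfo field lookup traced above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # First field whose name matches the key wins; echo its value.
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Total   # prints 1536 on this box

The traced version walks a pre-read array rather than the file, which is why every non-matching field shows up in the log as its own [[ ... ]] / continue pair.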
00:03:09.334 21:18:31 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:09.334 21:18:31 -- setup/hugepages.sh@112 -- # get_nodes
00:03:09.334 21:18:31 -- setup/hugepages.sh@27 -- # local node
00:03:09.334 21:18:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:09.334 21:18:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:09.334 21:18:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:09.334 21:18:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:09.334 21:18:31 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:09.334 21:18:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:09.334 21:18:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:09.334 21:18:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:09.334 21:18:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:09.334 21:18:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:09.334 21:18:31 -- setup/common.sh@18 -- # local node=0
00:03:09.334 21:18:31 -- setup/common.sh@19 -- # local var val
00:03:09.334 21:18:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:09.334 21:18:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.334 21:18:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:09.334 21:18:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:09.334 21:18:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.334 21:18:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.334 21:18:31 -- setup/common.sh@31 -- # IFS=': '
00:03:09.334 21:18:31 -- setup/common.sh@31 -- # read -r var val _
00:03:09.334 21:18:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 20279132 kB' 'MemUsed: 12360008 kB' 'SwapCached: 0 kB' 'Active: 6746648 kB' 'Inactive: 3290148 kB' 'Active(anon): 6212516 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3290148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9644868 kB' 'Mapped: 129048 kB' 'AnonPages: 395108 kB' 'Shmem: 5820588 kB' 'KernelStack: 11960 kB' 'PageTables: 4820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334404 kB' 'Slab: 651176 kB' 'SReclaimable: 334404 kB' 'SUnreclaim: 316772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:09.334 21:18:31 -- setup/common.sh@32 -- # [xtrace condensed: node0 fields compared against HugePages_Surp and skipped via continue until the match]
00:03:09.335 21:18:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.335 21:18:31 -- setup/common.sh@33 -- # echo 0
00:03:09.335 21:18:31 -- setup/common.sh@33 -- # return 0
00:03:09.335 21:18:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
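When get_meminfo is called with a node argument, as above, it switches to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the extglob expansion mem=("${mem[@]#Node +([0-9]) }") strips before the field match. A standalone sketch of that per-node read (the loop body is illustrative):

    # Sketch of the per-node meminfo read traced above.
    shopt -s extglob
    node=0
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node 0 " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Surp ]] && echo "node${node} surplus: ${val}"
    done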
00:03:09.335 21:18:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:09.335 21:18:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:09.335 21:18:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:09.335 21:18:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:09.335 21:18:31 -- setup/common.sh@18 -- # local node=1
00:03:09.335 21:18:31 -- setup/common.sh@19 -- # local var val
00:03:09.335 21:18:31 -- setup/common.sh@20 -- # local mem_f mem
00:03:09.335 21:18:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.335 21:18:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:09.336 21:18:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:09.336 21:18:31 -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.336 21:18:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.336 21:18:31 -- setup/common.sh@31 -- # IFS=': '
00:03:09.336 21:18:31 -- setup/common.sh@31 -- # read -r var val _
00:03:09.336 21:18:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656076 kB' 'MemFree: 18672788 kB' 'MemUsed: 8983288 kB' 'SwapCached: 0 kB' 'Active: 4281388 kB' 'Inactive: 374152 kB' 'Active(anon): 3704364 kB' 'Inactive(anon): 0 kB' 'Active(file): 577024 kB' 'Inactive(file): 374152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4448816 kB' 'Mapped: 85412 kB' 'AnonPages: 206936 kB' 'Shmem: 3497640 kB' 'KernelStack: 10168 kB' 'PageTables: 4652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 171064 kB' 'Slab: 507136 kB' 'SReclaimable: 171064 kB' 'SUnreclaim: 336072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:09.336 21:18:31 -- setup/common.sh@32 -- # [xtrace condensed: node1 fields compared against HugePages_Surp and skipped via continue until the match]
00:03:09.337 21:18:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.337 21:18:31 -- setup/common.sh@33 -- # echo 0
00:03:09.337 21:18:31 -- setup/common.sh@33 -- # return 0
00:03:09.337 21:18:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:09.337 21:18:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:09.337 21:18:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:09.337 21:18:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:09.337 21:18:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:09.337 node0=512 expecting 512
00:03:09.337 21:18:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:09.337 21:18:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:09.337 21:18:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:09.337 21:18:31 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:09.337 node1=1024 expecting 1024
00:03:09.337 21:18:31 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:09.337
00:03:09.337 real 0m3.196s
00:03:09.337 user 0m1.138s
00:03:09.337 sys 0m2.079s
00:03:09.337 21:18:31 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:09.337 21:18:31 -- common/autotest_common.sh@10 -- # set +x
00:03:09.337 ************************************
00:03:09.337 END TEST custom_alloc
00:03:09.337 ************************************
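custom_alloc ends by checking the per-node split it configured: 512 pages on node 0 plus 1024 on node 1, matching the 1536 total. A sketch of that per-node comparison against sysfs data (the expected split is this run's values, hard-coded for illustration):

    # Sketch: compare each node's HugePages_Total against an expected split.
    declare -A expected=([0]=512 [1]=1024)
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Node meminfo lines read "Node N HugePages_Total:  512"; value is $4.
        got=$(awk '/HugePages_Total:/ {print $4}' "${node_dir}/meminfo")
        echo "node${node}=${got} expecting ${expected[$node]}"
        [[ $got == "${expected[$node]}" ]] || exit 1
    done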
00:03:09.337 21:18:32 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:09.337 21:18:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:09.337 21:18:32 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:09.337 21:18:32 -- common/autotest_common.sh@10 -- # set +x
00:03:09.337 ************************************
00:03:09.337 START TEST no_shrink_alloc
00:03:09.337 ************************************
00:03:09.337 21:18:32 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:03:09.337 21:18:32 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:09.337 21:18:32 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:09.337 21:18:32 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:09.337 21:18:32 -- setup/hugepages.sh@51 -- # shift
00:03:09.337 21:18:32 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:09.337 21:18:32 -- setup/hugepages.sh@52 -- # local node_ids
00:03:09.337 21:18:32 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:09.337 21:18:32 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:09.337 21:18:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:09.337 21:18:32 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:09.337 21:18:32 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:09.337 21:18:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:09.337 21:18:32 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:09.597 21:18:32 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:09.597 21:18:32 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:09.597 21:18:32 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:09.597 21:18:32 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:09.597 21:18:32 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:09.597 21:18:32 -- setup/hugepages.sh@73 -- # return 0
00:03:09.598 21:18:32 -- setup/hugepages.sh@198 -- # setup output
00:03:09.598 21:18:32 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:09.598 21:18:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:12.896 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:12.896 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
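get_test_nr_hugepages just converted a 2097152 kB request at the default 2048 kB page size into nr_hugepages=1024, all of it assigned to node 0. For reference, the kernel knob such a per-node request ultimately lands on is the node's sysfs hugepages file; a sketch, run as root, whose arithmetic mirrors the trace above:

    # Sketch: request 2097152 kB of default-size hugepages on node 0.
    size_kb=2097152
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
    pages=$(( size_kb / hugepage_kb ))                               # 1024
    echo "$pages" > "/sys/devices/system/node/node0/hugepages/hugepages-${hugepage_kb}kB/nr_hugepages"

The test itself drives allocation through scripts/setup.sh rather than writing sysfs directly at this point, so treat this as the underlying mechanism, not the script's own code path.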
00:03:12.896 21:18:35 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:12.897 21:18:35 -- setup/hugepages.sh@89 -- # local node
00:03:12.897 21:18:35 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:12.897 21:18:35 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:12.897 21:18:35 -- setup/hugepages.sh@92 -- # local surp
00:03:12.897 21:18:35 -- setup/hugepages.sh@93 -- # local resv
00:03:12.897 21:18:35 -- setup/hugepages.sh@94 -- # local anon
00:03:12.897 21:18:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:12.897 21:18:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:12.897 21:18:35 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:12.897 21:18:35 -- setup/common.sh@18 -- # local node=
00:03:12.897 21:18:35 -- setup/common.sh@19 -- # local var val
00:03:12.897 21:18:35 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.897 21:18:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.897 21:18:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.897 21:18:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.897 21:18:35 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.897 21:18:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': '
00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _
00:03:12.897 21:18:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40006064 kB' 'MemAvailable: 44639280 kB' 'Buffers: 2696 kB' 'Cached: 14091068 kB' 'SwapCached: 0 kB' 'Active: 11030060 kB' 'Inactive: 3664300 kB' 'Active(anon): 9918904 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603804 kB' 'Mapped: 214228 kB' 'Shmem: 9318308 kB' 'KReclaimable: 505468 kB' 'Slab: 1158428 kB' 'SReclaimable: 505468 kB' 'SUnreclaim: 652960 kB' 'KernelStack: 22464 kB' 'PageTables: 10060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11316248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217436 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [xtrace condensed: each field compared against AnonHugePages and skipped via continue until the match]
00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:12.897 21:18:35 -- setup/common.sh@33 -- # echo 0
00:03:12.897 21:18:35 -- setup/common.sh@33 -- # return 0
00:03:12.897 21:18:35 -- setup/hugepages.sh@97 -- # anon=0
00:03:12.897 21:18:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:12.897 21:18:35 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.897 21:18:35 -- setup/common.sh@18 -- # local node=
00:03:12.897 21:18:35 -- setup/common.sh@19 -- # local var val
00:03:12.897 21:18:35 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.897 21:18:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.897 21:18:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.897 21:18:35 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.897 21:18:35 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.897 21:18:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': '
00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _
00:03:12.897 21:18:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40005436 kB' 'MemAvailable: 44638652 kB' 'Buffers: 2696 kB' 'Cached: 14091072 kB' 'SwapCached: 0 kB' 'Active: 11029436 kB' 'Inactive: 3664300 kB' 'Active(anon): 9918280 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB'
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603192 kB' 'Mapped: 214216 kB' 'Shmem: 9318312 kB' 'KReclaimable: 505468 kB' 'Slab: 1158352 kB' 'SReclaimable: 505468 kB' 'SUnreclaim: 652884 kB' 'KernelStack: 22240 kB' 'PageTables: 9604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11316260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217356 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB' 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- 
setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.897 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.897 21:18:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.898 21:18:35 -- setup/common.sh@33 -- # echo 0 00:03:12.898 21:18:35 -- setup/common.sh@33 -- # return 0 00:03:12.898 21:18:35 -- setup/hugepages.sh@99 -- # surp=0 00:03:12.898 21:18:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:12.898 21:18:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:12.898 21:18:35 -- setup/common.sh@18 -- # local node= 00:03:12.898 21:18:35 -- setup/common.sh@19 -- # local var val 00:03:12.898 21:18:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.898 21:18:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.898 21:18:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.898 21:18:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.898 21:18:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.898 21:18:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.898 21:18:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40006184 kB' 'MemAvailable: 44639400 kB' 'Buffers: 2696 kB' 'Cached: 14091084 kB' 'SwapCached: 0 kB' 'Active: 11029488 kB' 'Inactive: 3664300 kB' 'Active(anon): 9918332 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603236 kB' 'Mapped: 214216 kB' 'Shmem: 9318324 kB' 'KReclaimable: 505468 kB' 'Slab: 1158320 kB' 'SReclaimable: 505468 kB' 'SUnreclaim: 652852 kB' 'KernelStack: 22432 kB' 'PageTables: 10084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11316276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217404 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB' 00:03:12.898 21:18:35 -- 
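The scans condensed above are all the same helper at work: setup/common.sh's get_meminfo reads a meminfo file into an array and walks it key by key until it finds the requested field. A minimal sketch reconstructed from the xtrace (variable names and steps follow the trace; the real test/setup/common.sh may differ in detail, and the for/here-string loop here stands in for whatever read loop the script actually uses):

    shopt -s extglob                      # the +([0-9]) pattern below needs extglob

    get_meminfo() {
        local get=$1                      # key to fetch, e.g. HugePages_Surp
        local node=$2                     # optional NUMA node; empty = system-wide
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo file instead (trace @23-@24).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <N> "; strip it (trace @29).
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            # Each repeated [[ ... ]] / continue pair in the trace is one
            # iteration of this comparison.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

With that definition, get_meminfo HugePages_Total prints 1024 on this machine and get_meminfo HugePages_Surp 0 queries node 0, matching the values echoed in the trace.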
[... repetitive xtrace omitted: setup/common.sh@32 tests each key against HugePages_Rsvd and continues until the matching key ...]
00:03:12.898 21:18:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.898 21:18:35 -- setup/common.sh@33 -- # echo 0 00:03:12.898 21:18:35 -- setup/common.sh@33 -- # return 0 00:03:12.898 21:18:35 -- setup/hugepages.sh@100 -- # resv=0 00:03:12.898 21:18:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:12.898 nr_hugepages=1024 00:03:12.898 21:18:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:12.898 resv_hugepages=0 00:03:12.898 21:18:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:12.898 surplus_hugepages=0 00:03:12.898 21:18:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:12.898 anon_hugepages=0 00:03:12.898 21:18:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.898 21:18:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:12.898 21:18:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:12.898 21:18:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:12.898 21:18:35 -- setup/common.sh@18 -- # local node= 00:03:12.898 21:18:35 -- setup/common.sh@19 -- # local var val 00:03:12.898 21:18:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.898 21:18:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.898 21:18:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.898 21:18:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.898 21:18:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.898 21:18:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.898 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.899 21:18:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40006932 kB' 'MemAvailable: 44640148 kB' 'Buffers: 2696 kB' 'Cached: 14091096 kB' 'SwapCached: 0 kB' 'Active: 11029360 kB' 'Inactive: 3664300 kB' 'Active(anon): 9918204 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603064 kB' 'Mapped: 214216 kB' 'Shmem: 9318336 kB' 'KReclaimable: 505468 kB' 'Slab: 1158320 kB' 'SReclaimable: 505468 kB' 'SUnreclaim: 652852 kB' 'KernelStack: 22336 kB' 'PageTables: 9772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11314772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217356 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
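The setup/hugepages.sh steps traced here (@97 through @110) amount to a pool-accounting check: the number of hugepages the test requested must equal what the kernel reports once surplus and reserved pages are counted in. A hedged sketch of that check, assuming the get_meminfo sketch above; the function name and the req variable are illustrative, not the script's own:

    check_hugepage_accounting() {
        local req=1024                        # requested pool size, already expanded to 1024 in the trace
        local anon surp resv
        anon=$(get_meminfo AnonHugePages)     # trace @97: anon=0
        surp=$(get_meminfo HugePages_Surp)    # trace @99: surp=0
        resv=$(get_meminfo HugePages_Rsvd)    # trace @100: resv=0
        echo "nr_hugepages=$req"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        # Trace @110: the kernel-reported total must equal request + surplus + reserved.
        (( $(get_meminfo HugePages_Total) == req + surp + resv ))
    }

In this run every query returns 0 and HugePages_Total is 1024, so the check reduces to 1024 == 1024 + 0 + 0 and passes.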
[... repetitive xtrace omitted: setup/common.sh@32 tests each key against HugePages_Total and continues until the matching key ...]
00:03:12.899 21:18:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:12.899 21:18:35 -- setup/common.sh@33 -- # echo 1024 00:03:12.899 21:18:35 -- setup/common.sh@33 -- # return 0 00:03:12.899 21:18:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.899 21:18:35 -- setup/hugepages.sh@112 -- # get_nodes 00:03:12.899 21:18:35 -- setup/hugepages.sh@27 -- # local node 00:03:12.899 21:18:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.899 21:18:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:12.899 21:18:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.899 21:18:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:12.899 21:18:35 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:12.899 21:18:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.899 21:18:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:12.899 21:18:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:12.899 21:18:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
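get_nodes (trace @27-@33) then enumerates the NUMA nodes and records a per-node hugepage count, after which the @115-@117 loop re-runs get_meminfo with an explicit node argument. A sketch of that enumeration; the sysfs nr_hugepages path is an assumption (the trace only shows the already-expanded assignments nodes_sys[0]=1024 and nodes_sys[1]=0), and nodes_test is filled elsewhere in the real script:

    shopt -s extglob
    declare -a nodes_sys nodes_test
    no_nodes=0

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # Assumed source of the expanded values seen in the trace:
            # each node's 2048 kB hugepage pool size.
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}             # 2 on this machine
        (( no_nodes > 0 ))                    # the test requires at least one node
    }

    # Per-node verification as traced at @115-@117: fold reserved pages into the
    # expected count, then read that node's surplus through the node= branch of
    # get_meminfo, which is exactly what the node0 query below is doing.
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo HugePages_Surp "$node")
    done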
00:03:12.899 21:18:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.899 21:18:35 -- setup/common.sh@18 -- # local node=0 00:03:12.899 21:18:35 -- setup/common.sh@19 -- # local var val 00:03:12.899 21:18:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.899 21:18:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.899 21:18:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:12.899 21:18:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:12.899 21:18:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.899 21:18:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.899 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.899 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.899 21:18:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19228712 kB' 'MemUsed: 13410428 kB' 'SwapCached: 0 kB' 'Active: 6747296 kB' 'Inactive: 3290148 kB' 'Active(anon): 6213164 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3290148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9644932 kB' 'Mapped: 129036 kB' 'AnonPages: 395612 kB' 'Shmem: 5820652 kB' 'KernelStack: 12184 kB' 'PageTables: 5192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334404 kB' 'Slab: 651168 kB' 'SReclaimable: 334404 kB' 'SUnreclaim: 316764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... repetitive xtrace omitted: setup/common.sh@32 tests each node0 meminfo key against HugePages_Surp; the trace is truncated here, mid-scan ...]
00:03:12.900 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.900 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.900 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.900 21:18:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.900 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.900 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.900 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.900 21:18:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.900 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.900 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.900 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.900 21:18:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.900 21:18:35 -- setup/common.sh@32 -- # continue 00:03:12.900 21:18:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.900 21:18:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.900 21:18:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.900 21:18:35 -- setup/common.sh@33 -- # echo 0 00:03:12.900 21:18:35 -- setup/common.sh@33 -- # return 0 00:03:12.900 21:18:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.900 21:18:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.900 21:18:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.900 21:18:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.900 21:18:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:12.900 node0=1024 expecting 1024 00:03:12.900 21:18:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:12.900 21:18:35 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:12.900 21:18:35 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:12.900 21:18:35 -- setup/hugepages.sh@202 -- # setup output 00:03:12.900 21:18:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.900 21:18:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:16.200 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.200 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:16.200 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:16.200 21:18:38 -- setup/hugepages.sh@204 -- # 
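The get_meminfo calls traced above and below all follow the same pattern: pick /proc/meminfo or a per-node sysfs file, strip the "Node <N> " prefix, and scan key/value pairs until the requested field matches. A minimal Bash sketch of that logic, inferred from the traced commands rather than copied from setup/common.sh:

# Sketch: print one meminfo value, optionally scoped to a NUMA node.
shopt -s extglob  # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=${2:-}
    local var val line
    local mem_f mem
    mem_f=/proc/meminfo
    # Per-node counters live in sysfs; fall back to the global file when
    # no node is given (node$node then expands to a non-existent path).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan "Key: value [kB]" lines until the requested key is found.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

With the node0 snapshot printed above, get_meminfo HugePages_Surp 0 yields 0, which is exactly the value the trace returns at setup/common.sh@33.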
00:03:16.200 21:18:38 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:16.200 21:18:38 -- setup/hugepages.sh@89 -- # local node
00:03:16.200 21:18:38 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:16.200 21:18:38 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:16.200 21:18:38 -- setup/hugepages.sh@92 -- # local surp
00:03:16.200 21:18:38 -- setup/hugepages.sh@93 -- # local resv
00:03:16.200 21:18:38 -- setup/hugepages.sh@94 -- # local anon
00:03:16.200 21:18:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:16.200 21:18:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:16.200 21:18:38 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:16.200 21:18:38 -- setup/common.sh@18 -- # local node=
00:03:16.200 21:18:38 -- setup/common.sh@19 -- # local var val
00:03:16.200 21:18:38 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.200 21:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.200 21:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.200 21:18:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.200 21:18:38 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.200 21:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.200 21:18:38 -- setup/common.sh@31 -- # IFS=': '
00:03:16.200 21:18:38 -- setup/common.sh@31 -- # read -r var val _
00:03:16.200 21:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40004752 kB' 'MemAvailable: 44637968 kB' 'Buffers: 2696 kB' 'Cached: 14091184 kB' 'SwapCached: 0 kB' 'Active: 11030904 kB' 'Inactive: 3664300 kB' 'Active(anon): 9919748 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604560 kB' 'Mapped: 214312 kB' 'Shmem: 9318424 kB' 'KReclaimable: 505468 kB' 'Slab: 1159068 kB' 'SReclaimable: 505468 kB' 'SUnreclaim: 653600 kB' 'KernelStack: 22192 kB' 'PageTables: 9312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11315564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217420 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:03:16.200 21:18:38 -- setup/common.sh@32 -- # [trace elided: every field from MemTotal through HardwareCorrupted is tested against AnonHugePages and skipped with continue]
00:03:16.201 21:18:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:16.201 21:18:38 -- setup/common.sh@33 -- # echo 0
00:03:16.201 21:18:38 -- setup/common.sh@33 -- # return 0
00:03:16.201 21:18:38 -- setup/hugepages.sh@97 -- # anon=0
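The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at setup/hugepages.sh@96 gates the AnonHugePages lookup on transparent hugepages being enabled. A sketch of the apparent logic; the sysfs path is an assumption, since the trace only shows the already-expanded mode string:

# Sketch (assumed path): only count anonymous hugepages when THP is not
# fully disabled, i.e. when "[never]" is not the selected mode.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP is in always or madvise mode, so AnonHugePages can be non-zero.
    anon=$(get_meminfo AnonHugePages)  # 0 kB in this run
fi

Here the mode string is "always [madvise] never", so the branch is taken and anon still ends up 0, matching hugepages.sh@97 above.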
00:03:16.201 21:18:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:16.201 21:18:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.201 21:18:38 -- setup/common.sh@18 -- # local node=
00:03:16.201 21:18:38 -- setup/common.sh@19 -- # local var val
00:03:16.201 21:18:38 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.201 21:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.201 21:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.201 21:18:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.201 21:18:38 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.201 21:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.201 21:18:38 -- setup/common.sh@31 -- # IFS=': '
00:03:16.201 21:18:38 -- setup/common.sh@31 -- # read -r var val _
00:03:16.201 21:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40005924 kB' 'MemAvailable: 44639140 kB' 'Buffers: 2696 kB' 'Cached: 14091188 kB' 'SwapCached: 0 kB' 'Active: 11030436 kB' 'Inactive: 3664300 kB' 'Active(anon): 9919280 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604020 kB' 'Mapped: 214300 kB' 'Shmem: 9318428 kB' 'KReclaimable: 505468 kB' 'Slab: 1159052 kB' 'SReclaimable: 505468 kB' 'SUnreclaim: 653584 kB' 'KernelStack: 22224 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11315576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217308 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:03:16.201 21:18:38 -- setup/common.sh@32 -- # [trace elided: each field from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped with continue]
00:03:16.202 21:18:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.202 21:18:38 -- setup/common.sh@33 -- # echo 0
00:03:16.203 21:18:38 -- setup/common.sh@33 -- # return 0
00:03:16.203 21:18:38 -- setup/hugepages.sh@99 -- # surp=0
00:03:16.203 21:18:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:16.203 21:18:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:16.203 21:18:38 -- setup/common.sh@18 -- # local node=
00:03:16.203 21:18:38 -- setup/common.sh@19 -- # local var val
00:03:16.203 21:18:38 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.203 21:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.203 21:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.203 21:18:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.203 21:18:38 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.203 21:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.203 21:18:38 -- setup/common.sh@31 -- # IFS=': '
00:03:16.203 21:18:38 -- setup/common.sh@31 -- # read -r var val _
00:03:16.203 21:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40006160 kB' 'MemAvailable: 44639376 kB' 'Buffers: 2696 kB' 'Cached: 14091200 kB' 'SwapCached: 0 kB' 'Active: 11030508 kB' 'Inactive: 3664300 kB' 'Active(anon): 9919352 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3664300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604096 kB' 'Mapped: 214224 kB' 'Shmem: 9318440 kB' 'KReclaimable: 505468 kB' 'Slab: 1158996 kB' 'SReclaimable: 505468 kB' 'SUnreclaim: 653528 kB' 'KernelStack: 22320 kB' 'PageTables: 9812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11317108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217404 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB'
00:03:16.203 21:18:38 -- setup/common.sh@32 -- # [trace elided: each field from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped with continue]
00:03:16.204 21:18:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:16.204 21:18:38 -- setup/common.sh@33 -- # echo 0
00:03:16.204 21:18:38 -- setup/common.sh@33 -- # return 0
00:03:16.204 21:18:38 -- setup/hugepages.sh@100 -- # resv=0
00:03:16.204 21:18:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:16.204 nr_hugepages=1024
00:03:16.204 21:18:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:16.204 resv_hugepages=0
00:03:16.204 21:18:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:16.204 surplus_hugepages=0
00:03:16.204 21:18:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:16.204 anon_hugepages=0
00:03:16.204 21:18:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:16.204 21:18:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
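Those two arithmetic checks are the core of verify_nr_hugepages: the hugepage count the test requested must match what the kernel accounts for once surplus and reserved pages are added in. Restated with this run's values (the variable behind the expanded literal 1024 is not visible in the trace, so `expected` below is an illustrative name):

# The accounting just traced, with this run's numbers filled in.
expected=1024      # the literal both comparisons expanded to
nr_hugepages=1024  # echoed by the script above
surp=0             # HugePages_Surp, read via get_meminfo
resv=0             # HugePages_Rsvd, read via get_meminfo
(( expected == nr_hugepages + surp + resv ))  # 1024 == 1024 + 0 + 0 -> true
(( expected == nr_hugepages ))                # also true: surp and resv are 0

Both tests pass, so the script goes on to cross-check HugePages_Total from /proc/meminfo below.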
34359738367 kB' 'VmallocUsed: 217356 kB' 'VmallocChunk: 0 kB' 'Percpu: 127680 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3280244 kB' 'DirectMap2M: 16328704 kB' 'DirectMap1G: 49283072 kB' 00:03:16.204 21:18:38 -- [trace condensed: setup/common.sh@32 compares each /proc/meminfo key from MemTotal through Unaccepted against HugePages_Total and continues past every non-match] 00:03:16.206 21:18:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.206 21:18:38 --
setup/common.sh@33 -- # echo 1024 00:03:16.206 21:18:38 -- setup/common.sh@33 -- # return 0 00:03:16.206 21:18:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.206 21:18:38 -- setup/hugepages.sh@112 -- # get_nodes 00:03:16.206 21:18:38 -- setup/hugepages.sh@27 -- # local node 00:03:16.206 21:18:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.206 21:18:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:16.206 21:18:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.206 21:18:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:16.206 21:18:38 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:16.206 21:18:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.206 21:18:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.206 21:18:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.206 21:18:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:16.206 21:18:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.206 21:18:38 -- setup/common.sh@18 -- # local node=0 00:03:16.206 21:18:38 -- setup/common.sh@19 -- # local var val 00:03:16.206 21:18:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.206 21:18:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.206 21:18:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.206 21:18:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.206 21:18:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.206 21:18:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.206 21:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.206 21:18:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19218708 kB' 'MemUsed: 13420432 kB' 'SwapCached: 0 kB' 'Active: 6747744 kB' 'Inactive: 3290148 kB' 'Active(anon): 6213612 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3290148 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9644936 kB' 'Mapped: 129036 kB' 'AnonPages: 396024 kB' 'Shmem: 5820656 kB' 'KernelStack: 12072 kB' 'PageTables: 5228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 334404 kB' 'Slab: 651768 kB' 'SReclaimable: 334404 kB' 'SUnreclaim: 317364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:16.206 21:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.206 21:18:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.206 21:18:38 -- setup/common.sh@32 -- # continue 00:03:16.206 21:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.206 21:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.206 21:18:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.206 21:18:38 -- setup/common.sh@32 -- # continue 00:03:16.206 21:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.206 21:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.206 21:18:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.206 21:18:38 -- setup/common.sh@32 -- # continue 00:03:16.206 21:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.206 21:18:38 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:16.206 21:18:38 -- [trace condensed: setup/common.sh@32 walks the node0 meminfo keys from SwapCached through SUnreclaim, comparing each against HugePages_Surp and continuing past every non-match] 00:03:16.206 21:18:38 -- setup/common.sh@32 -- # [[ AnonHugePages ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.206 21:18:38 -- setup/common.sh@32 -- # continue 00:03:16.206 21:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.206 21:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # continue 00:03:16.207 21:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.207 21:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # continue 00:03:16.207 21:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.207 21:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # continue 00:03:16.207 21:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.207 21:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # continue 00:03:16.207 21:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.207 21:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # continue 00:03:16.207 21:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.207 21:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # continue 00:03:16.207 21:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.207 21:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # continue 00:03:16.207 21:18:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.207 21:18:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.207 21:18:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.207 21:18:38 -- setup/common.sh@33 -- # echo 0 00:03:16.207 21:18:38 -- setup/common.sh@33 -- # return 0 00:03:16.207 21:18:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.207 21:18:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.207 21:18:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.207 21:18:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.207 21:18:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:16.207 node0=1024 expecting 1024 00:03:16.207 21:18:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:16.207 00:03:16.207 real 0m6.469s 00:03:16.207 user 0m2.326s 00:03:16.207 sys 0m4.183s 00:03:16.207 21:18:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:16.207 21:18:38 -- common/autotest_common.sh@10 -- # set +x 00:03:16.207 ************************************ 00:03:16.207 END TEST no_shrink_alloc 00:03:16.207 ************************************ 00:03:16.207 21:18:38 -- setup/hugepages.sh@217 -- # clear_hp 00:03:16.207 21:18:38 -- setup/hugepages.sh@37 -- # local node hp 00:03:16.207 21:18:38 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:16.207 
21:18:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.207 21:18:38 -- setup/hugepages.sh@41 -- # echo 0 00:03:16.207 21:18:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.207 21:18:38 -- setup/hugepages.sh@41 -- # echo 0 00:03:16.207 21:18:38 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:16.207 21:18:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.207 21:18:38 -- setup/hugepages.sh@41 -- # echo 0 00:03:16.207 21:18:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.207 21:18:38 -- setup/hugepages.sh@41 -- # echo 0 00:03:16.207 21:18:38 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:16.207 21:18:38 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:16.207 00:03:16.207 real 0m25.347s 00:03:16.207 user 0m8.353s 00:03:16.207 sys 0m15.358s 00:03:16.207 21:18:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:16.207 21:18:38 -- common/autotest_common.sh@10 -- # set +x 00:03:16.207 ************************************ 00:03:16.207 END TEST hugepages 00:03:16.207 ************************************ 00:03:16.207 21:18:38 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:16.207 21:18:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:16.207 21:18:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:16.207 21:18:38 -- common/autotest_common.sh@10 -- # set +x 00:03:16.207 ************************************ 00:03:16.207 START TEST driver 00:03:16.207 ************************************ 00:03:16.207 21:18:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:16.207 * Looking for test storage... 
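The hugepages run above drives one parsing pattern over and over: setup/common.sh reads /proc/meminfo (or a node's meminfo, whose lines carry a "Node <n> " prefix) with IFS=': ' and returns the value for a single key. Below is a minimal standalone sketch of that pattern; the helper names get_meminfo_value and node_meminfo_value are illustrative, not SPDK's API (the real helper is get_meminfo in setup/common.sh).

    get_meminfo_value() {                  # system-wide lookup in /proc/meminfo
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    node_meminfo_value() {                 # per-node: strip the "Node <n> " prefix first
        local node=$1 get=$2 line var val _
        while read -r line; do
            line=${line#"Node $node "}
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    # In this run: get_meminfo_value HugePages_Total  -> 1024
    #              node_meminfo_value 0 HugePages_Surp -> 0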
00:03:16.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:16.207 21:18:39 -- setup/driver.sh@68 -- # setup reset 00:03:16.207 21:18:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.207 21:18:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.488 21:18:43 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:21.488 21:18:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.488 21:18:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.488 21:18:43 -- common/autotest_common.sh@10 -- # set +x 00:03:21.488 ************************************ 00:03:21.488 START TEST guess_driver 00:03:21.489 ************************************ 00:03:21.489 21:18:43 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:21.489 21:18:43 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:21.489 21:18:43 -- setup/driver.sh@47 -- # local fail=0 00:03:21.489 21:18:43 -- setup/driver.sh@49 -- # pick_driver 00:03:21.489 21:18:43 -- setup/driver.sh@36 -- # vfio 00:03:21.489 21:18:43 -- setup/driver.sh@21 -- # local iommu_grups 00:03:21.489 21:18:43 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:21.489 21:18:43 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:21.489 21:18:43 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:21.489 21:18:43 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:21.489 21:18:43 -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:21.489 21:18:43 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:21.489 21:18:43 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:21.489 21:18:43 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:21.489 21:18:43 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:21.489 21:18:43 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:21.489 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:21.489 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:21.489 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:21.489 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:21.489 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:21.489 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:21.489 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:21.489 21:18:43 -- setup/driver.sh@30 -- # return 0 00:03:21.489 21:18:43 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:21.489 21:18:43 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:21.489 21:18:43 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:21.489 21:18:43 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:21.489 Looking for driver=vfio-pci 00:03:21.489 21:18:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:21.489 21:18:43 -- setup/driver.sh@45 -- # setup output config 00:03:21.489 21:18:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.489 21:18:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:24.030 21:18:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:24.030 21:18:46 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:24.030 21:18:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.030 21:18:46 -- [trace condensed: the marker/driver check at setup/driver.sh@58 and @61 repeats identically for each remaining device line emitted by setup.sh config, every one reporting vfio-pci] 00:03:25.940 21:18:48 -- setup/driver.sh@58 -- # [[
-> == \-\> ]] 00:03:25.940 21:18:48 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:25.940 21:18:48 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:25.940 21:18:48 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:25.940 21:18:48 -- setup/driver.sh@65 -- # setup reset 00:03:25.940 21:18:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.940 21:18:48 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.137 00:03:30.137 real 0m8.843s 00:03:30.137 user 0m2.087s 00:03:30.137 sys 0m4.382s 00:03:30.137 21:18:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:30.137 21:18:52 -- common/autotest_common.sh@10 -- # set +x 00:03:30.137 ************************************ 00:03:30.137 END TEST guess_driver 00:03:30.137 ************************************ 00:03:30.137 00:03:30.137 real 0m13.915s 00:03:30.137 user 0m3.447s 00:03:30.137 sys 0m7.298s 00:03:30.137 21:18:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:30.137 21:18:52 -- common/autotest_common.sh@10 -- # set +x 00:03:30.137 ************************************ 00:03:30.137 END TEST driver 00:03:30.137 ************************************ 00:03:30.137 21:18:52 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:30.137 21:18:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.137 21:18:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.137 21:18:52 -- common/autotest_common.sh@10 -- # set +x 00:03:30.396 ************************************ 00:03:30.396 START TEST devices 00:03:30.396 ************************************ 00:03:30.396 21:18:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:30.396 * Looking for test storage... 
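The guess_driver trace above reduces to one decision: pick vfio-pci when the module's dependencies resolve and IOMMU groups are present (or vfio's unsafe no-IOMMU mode is enabled). A hedged sketch of that logic, not the verbatim test/setup/driver.sh:

    pick_driver() {
        local unsafe=N groups=(/sys/kernel/iommu_groups/*)
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        # 176 groups were found in this run, so the vfio branch is taken;
        # note: with nullglob unset an empty dir still yields one literal entry,
        # which this simplified sketch does not guard against.
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            modprobe --show-depends vfio_pci >/dev/null 2>&1 && { echo vfio-pci; return 0; }
        fi
        echo 'No valid driver found'   # the fallback string the trace compares against
        return 1
    }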
00:03:30.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:30.396 21:18:53 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:30.396 21:18:53 -- setup/devices.sh@192 -- # setup reset 00:03:30.396 21:18:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.396 21:18:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.595 21:18:56 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:34.595 21:18:56 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:34.595 21:18:56 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:34.595 21:18:56 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:34.595 21:18:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:34.595 21:18:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:34.595 21:18:56 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:34.595 21:18:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:34.595 21:18:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:34.595 21:18:56 -- setup/devices.sh@196 -- # blocks=() 00:03:34.595 21:18:56 -- setup/devices.sh@196 -- # declare -a blocks 00:03:34.595 21:18:56 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:34.595 21:18:56 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:34.595 21:18:56 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:34.595 21:18:56 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:34.595 21:18:56 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:34.595 21:18:56 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:34.595 21:18:56 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:34.595 21:18:56 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:34.595 21:18:56 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:34.595 21:18:56 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:34.595 21:18:56 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:34.595 No valid GPT data, bailing 00:03:34.595 21:18:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:34.595 21:18:56 -- scripts/common.sh@391 -- # pt= 00:03:34.595 21:18:56 -- scripts/common.sh@392 -- # return 1 00:03:34.595 21:18:56 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:34.595 21:18:56 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:34.595 21:18:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:34.595 21:18:56 -- setup/common.sh@80 -- # echo 1600321314816 00:03:34.595 21:18:56 -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:34.595 21:18:56 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:34.595 21:18:56 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:34.595 21:18:56 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:34.595 21:18:56 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:34.595 21:18:56 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:34.595 21:18:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.595 21:18:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.595 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:03:34.595 ************************************ 00:03:34.595 START TEST nvme_mount 00:03:34.595 ************************************ 00:03:34.595 21:18:56 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:03:34.595 21:18:56 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:34.595 21:18:56 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:34.595 21:18:56 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.595 21:18:56 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.595 21:18:56 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:34.595 21:18:56 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:34.595 21:18:56 -- setup/common.sh@40 -- # local part_no=1 00:03:34.595 21:18:56 -- setup/common.sh@41 -- # local size=1073741824 00:03:34.595 21:18:56 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:34.595 21:18:56 -- setup/common.sh@44 -- # parts=() 00:03:34.595 21:18:56 -- setup/common.sh@44 -- # local parts 00:03:34.595 21:18:56 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:34.595 21:18:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:34.595 21:18:56 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:34.595 21:18:56 -- setup/common.sh@46 -- # (( part++ )) 00:03:34.595 21:18:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:34.595 21:18:56 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:34.595 21:18:56 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:34.595 21:18:56 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:35.164 Creating new GPT entries in memory. 00:03:35.164 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:35.164 other utilities. 00:03:35.164 21:18:57 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:35.164 21:18:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.164 21:18:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:35.164 21:18:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:35.164 21:18:57 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:36.102 Creating new GPT entries in memory. 00:03:36.102 The operation has completed successfully. 
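The partition_drive sequence just traced boils down to three shell steps. This sketch uses this run's disk path and substitutes a plain partprobe for SPDK's sync_dev_uevents.sh wrapper, which is a simplification:

    disk=/dev/nvme0n1                  # this run's test disk
    sgdisk "$disk" --zap-all           # destroy GPT and MBR data structures
    # 1 GiB partition: 1073741824 / 512 = 2097152 sectors, so the first
    # partition spans LBA 2048 .. 2048 + 2097152 - 1 = 2099199
    sgdisk "$disk" --new=1:2048:2099199
    partprobe "$disk"                  # re-read the partition table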
00:03:36.361 21:18:58 -- setup/common.sh@57 -- # (( part++ )) 00:03:36.361 21:18:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:36.361 21:18:58 -- setup/common.sh@62 -- # wait 2655047 00:03:36.361 21:18:59 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.361 21:18:59 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:36.361 21:18:59 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.361 21:18:59 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:36.361 21:18:59 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:36.361 21:18:59 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.361 21:18:59 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.361 21:18:59 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:36.361 21:18:59 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:36.361 21:18:59 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.361 21:18:59 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.361 21:18:59 -- setup/devices.sh@53 -- # local found=0 00:03:36.361 21:18:59 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:36.361 21:18:59 -- setup/devices.sh@56 -- # : 00:03:36.361 21:18:59 -- setup/devices.sh@59 -- # local pci status 00:03:36.361 21:18:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.361 21:18:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:36.361 21:18:59 -- setup/devices.sh@47 -- # setup output config 00:03:36.361 21:18:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.361 21:18:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:39.666 21:19:02 -- setup/devices.sh@63 -- # found=1 00:03:39.666 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.666 21:19:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:39.666 21:19:02 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:39.666 21:19:02 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.666 21:19:02 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:39.666 21:19:02 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.666 21:19:02 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:39.666 21:19:02 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.666 21:19:02 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.666 21:19:02 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:39.666 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:39.666 21:19:02 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:39.666 21:19:02 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:39.979 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:39.979 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:39.979 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:39.979 
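The wipefs output above is cleanup_nvme erasing what mkfs and sgdisk wrote: the ext4 superblock magic (53 ef) on the partition, then the primary GPT header, backup GPT header and protective MBR signature (55 aa) on the whole disk. As a standalone sketch of the same cleanup:

    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # ext4 magic 53 ef
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # both GPT headers + PMBR 55 aa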
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:39.979 21:19:02 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:39.979 21:19:02 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:39.979 21:19:02 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.979 21:19:02 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:39.979 21:19:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:39.979 21:19:02 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.979 21:19:02 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.979 21:19:02 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:39.979 21:19:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:39.979 21:19:02 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.979 21:19:02 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.979 21:19:02 -- setup/devices.sh@53 -- # local found=0 00:03:39.979 21:19:02 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:39.979 21:19:02 -- setup/devices.sh@56 -- # : 00:03:39.979 21:19:02 -- setup/devices.sh@59 -- # local pci status 00:03:39.979 21:19:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.979 21:19:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:39.979 21:19:02 -- setup/devices.sh@47 -- # setup output config 00:03:39.979 21:19:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.979 21:19:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:43.273 21:19:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:43.273 21:19:05 -- setup/devices.sh@63 -- # found=1 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.273 21:19:05 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:43.273 21:19:05 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.273 21:19:05 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.273 21:19:05 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.273 21:19:05 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.273 21:19:05 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:43.273 21:19:05 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:43.273 21:19:05 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:43.273 21:19:05 -- setup/devices.sh@50 -- # local mount_point= 00:03:43.273 21:19:05 -- setup/devices.sh@51 -- # local test_file= 00:03:43.273 21:19:05 -- setup/devices.sh@53 -- # local found=0 00:03:43.273 21:19:05 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:43.273 21:19:05 -- setup/devices.sh@59 -- # local pci status 00:03:43.273 21:19:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.273 21:19:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:43.273 21:19:05 -- setup/devices.sh@47 -- # setup output config 00:03:43.273 21:19:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.273 21:19:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:46.572 21:19:08 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:09 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:46.572 21:19:09 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:46.572 21:19:09 -- setup/devices.sh@63 -- # found=1 00:03:46.572 21:19:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:46.572 21:19:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:46.572 21:19:09 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:46.572 21:19:09 -- setup/devices.sh@68 -- # return 0 00:03:46.572 21:19:09 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:46.572 21:19:09 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:46.572 21:19:09 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:03:46.572 21:19:09 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:46.572 21:19:09 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:46.572 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:46.572 00:03:46.572 real 0m12.358s 00:03:46.572 user 0m3.395s 00:03:46.572 sys 0m6.770s 00:03:46.572 21:19:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:46.572 21:19:09 -- common/autotest_common.sh@10 -- # set +x 00:03:46.572 ************************************ 00:03:46.572 END TEST nvme_mount 00:03:46.572 ************************************ 00:03:46.572 21:19:09 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:46.572 21:19:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:46.572 21:19:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:46.572 21:19:09 -- common/autotest_common.sh@10 -- # set +x 00:03:46.832 ************************************ 00:03:46.832 START TEST dm_mount 00:03:46.832 ************************************ 00:03:46.832 21:19:09 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:46.832 21:19:09 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:46.832 21:19:09 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:46.832 21:19:09 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:46.832 21:19:09 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:46.832 21:19:09 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:46.832 21:19:09 -- setup/common.sh@40 -- # local part_no=2 00:03:46.832 21:19:09 -- setup/common.sh@41 -- # local size=1073741824 00:03:46.832 21:19:09 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:46.832 21:19:09 -- setup/common.sh@44 -- # parts=() 00:03:46.832 21:19:09 -- setup/common.sh@44 -- # local parts 00:03:46.832 21:19:09 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:46.832 21:19:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.832 21:19:09 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.832 21:19:09 -- setup/common.sh@46 -- # (( part++ )) 00:03:46.832 21:19:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.832 21:19:09 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:46.832 21:19:09 -- setup/common.sh@46 -- # (( part++ )) 00:03:46.832 21:19:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:46.832 21:19:09 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:46.832 21:19:09 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:46.832 21:19:09 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:47.772 Creating new GPT entries in memory. 00:03:47.772 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:47.772 other utilities. 00:03:47.772 21:19:10 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:47.772 21:19:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.772 21:19:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:47.772 21:19:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:47.772 21:19:10 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:48.713 Creating new GPT entries in memory. 00:03:48.713 The operation has completed successfully. 
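For reference, the partition_drive sequence being replayed here reduces to a small sgdisk loop: zap the label, then carve out one 1 GiB partition per iteration, with flock serializing access to the disk; the same loop immediately repeats for the second partition below. A minimal standalone sketch of that loop (the device name and partition count are illustrative, and partprobe stands in for the sync_dev_uevents.sh helper the test actually uses):

    #!/usr/bin/env bash
    # Sketch of the partition_drive pattern traced in this log.
    disk=nvme0n1                   # hypothetical target device
    part_no=2                      # partitions to create, as in the dm test
    size=$((1073741824 / 512))     # 1 GiB expressed in 512-byte sectors

    sgdisk "/dev/$disk" --zap-all

    part_start=0 part_end=0
    for ((part = 1; part <= part_no; part++)); do
        ((part_start = part_start == 0 ? 2048 : part_end + 1))
        ((part_end = part_start + size - 1))
        # flock keeps concurrent sgdisk callers from racing on the same disk
        flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done

    partprobe "/dev/$disk"         # re-read the partition table

With size = 1073741824 / 512 = 2097152 sectors, the first iteration produces exactly the --new=1:2048:2099199 call seen above, and the second produces the --new=2:2099200:4196351 call that follows.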
00:03:48.713 21:19:11 -- setup/common.sh@57 -- # (( part++ )) 00:03:48.713 21:19:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.713 21:19:11 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:48.713 21:19:11 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:48.713 21:19:11 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:50.096 The operation has completed successfully. 00:03:50.096 21:19:12 -- setup/common.sh@57 -- # (( part++ )) 00:03:50.096 21:19:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.096 21:19:12 -- setup/common.sh@62 -- # wait 2659480 00:03:50.096 21:19:12 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:50.096 21:19:12 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.096 21:19:12 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:50.096 21:19:12 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:50.096 21:19:12 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:50.096 21:19:12 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:50.096 21:19:12 -- setup/devices.sh@161 -- # break 00:03:50.096 21:19:12 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:50.096 21:19:12 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:50.096 21:19:12 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:50.096 21:19:12 -- setup/devices.sh@166 -- # dm=dm-0 00:03:50.096 21:19:12 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:50.096 21:19:12 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:50.096 21:19:12 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.096 21:19:12 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:50.096 21:19:12 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.096 21:19:12 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:50.096 21:19:12 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:50.096 21:19:12 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.096 21:19:12 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:50.096 21:19:12 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:50.096 21:19:12 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:50.096 21:19:12 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.096 21:19:12 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:50.096 21:19:12 -- setup/devices.sh@53 -- # local found=0 00:03:50.096 21:19:12 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:50.096 21:19:12 -- setup/devices.sh@56 -- # : 00:03:50.096 21:19:12 -- 
setup/devices.sh@59 -- # local pci status 00:03:50.096 21:19:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.096 21:19:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:50.096 21:19:12 -- setup/devices.sh@47 -- # setup output config 00:03:50.096 21:19:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.096 21:19:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:53.390 21:19:15 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:53.390 21:19:15 -- 
setup/devices.sh@63 -- # found=1 00:03:53.390 21:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.390 21:19:16 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:53.390 21:19:16 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:53.390 21:19:16 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:53.390 21:19:16 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:53.390 21:19:16 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:53.390 21:19:16 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:53.390 21:19:16 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:53.390 21:19:16 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:53.390 21:19:16 -- setup/devices.sh@50 -- # local mount_point= 00:03:53.390 21:19:16 -- setup/devices.sh@51 -- # local test_file= 00:03:53.390 21:19:16 -- setup/devices.sh@53 -- # local found=0 00:03:53.390 21:19:16 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:53.390 21:19:16 -- setup/devices.sh@59 -- # local pci status 00:03:53.390 21:19:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.390 21:19:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:53.390 21:19:16 -- setup/devices.sh@47 -- # setup output config 00:03:53.390 21:19:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.390 21:19:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:56.717 21:19:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.717 21:19:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.717 21:19:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.717 21:19:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.717 21:19:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.717 21:19:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.717 21:19:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.717 21:19:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.717 21:19:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.717 21:19:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.717 21:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.717 21:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.717 21:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.717 21:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.717 21:19:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.717 21:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.717 21:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.717 21:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.717 21:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.717 21:19:19 -- setup/devices.sh@60 -- # 
read -r pci _ _ status 00:03:56.717 21:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.717 21:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.717 21:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.717 21:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.717 21:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.718 21:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.718 21:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.718 21:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.718 21:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.718 21:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.718 21:19:19 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.718 21:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.718 21:19:19 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:56.718 21:19:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:56.718 21:19:19 -- setup/devices.sh@63 -- # found=1 00:03:56.718 21:19:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.718 21:19:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.718 21:19:19 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:56.718 21:19:19 -- setup/devices.sh@68 -- # return 0 00:03:56.718 21:19:19 -- setup/devices.sh@187 -- # cleanup_dm 00:03:56.718 21:19:19 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:56.718 21:19:19 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:56.718 21:19:19 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:56.718 21:19:19 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.718 21:19:19 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:56.718 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.718 21:19:19 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:56.718 21:19:19 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:56.718 00:03:56.718 real 0m9.806s 00:03:56.718 user 0m2.320s 00:03:56.718 sys 0m4.506s 00:03:56.718 21:19:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:56.718 21:19:19 -- common/autotest_common.sh@10 -- # set +x 00:03:56.718 ************************************ 00:03:56.718 END TEST dm_mount 00:03:56.718 ************************************ 00:03:56.718 21:19:19 -- setup/devices.sh@1 -- # cleanup 00:03:56.718 21:19:19 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:56.718 21:19:19 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.718 21:19:19 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.718 21:19:19 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:56.718 21:19:19 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.718 21:19:19 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.978 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:56.978 /dev/nvme0n1: 8 bytes were erased at 
offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54
00:03:56.978 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:56.978 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:56.978 21:19:19 -- setup/devices.sh@12 -- # cleanup_dm
00:03:56.978 21:19:19 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:56.978 21:19:19 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:56.978 21:19:19 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:56.978 21:19:19 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:56.978 21:19:19 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:56.978 21:19:19 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:56.978
00:03:56.978 real 0m26.600s
00:03:56.978 user 0m7.236s
00:03:56.978 sys 0m14.047s
00:03:56.978 21:19:19 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:56.978 21:19:19 -- common/autotest_common.sh@10 -- # set +x
00:03:56.978 ************************************
00:03:56.978 END TEST devices
00:03:56.978 ************************************
00:03:56.978
00:03:56.978 real 1m31.004s
00:03:56.978 user 0m26.987s
00:03:56.978 sys 0m52.044s
00:03:56.978 21:19:19 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:56.978 21:19:19 -- common/autotest_common.sh@10 -- # set +x
00:03:56.978 ************************************
00:03:56.978 END TEST setup.sh
00:03:56.978 ************************************
00:03:56.978 21:19:19 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:00.273 Hugepages
00:04:00.273 node   hugesize   free /  total
00:04:00.273 node0  1048576kB     0 /      0
00:04:00.273 node0     2048kB  2048 /   2048
00:04:00.273 node1  1048576kB     0 /      0
00:04:00.273 node1     2048kB     0 /      0
00:04:00.273
00:04:00.273 Type   BDF           Vendor Device NUMA Driver  Device Block devices
00:04:00.273 I/OAT  0000:00:04.0  8086   2021   0    ioatdma -      -
00:04:00.273 I/OAT  0000:00:04.1  8086   2021   0    ioatdma -      -
00:04:00.273 I/OAT  0000:00:04.2  8086   2021   0    ioatdma -      -
00:04:00.273 I/OAT  0000:00:04.3  8086   2021   0    ioatdma -      -
00:04:00.273 I/OAT  0000:00:04.4  8086   2021   0    ioatdma -      -
00:04:00.273 I/OAT  0000:00:04.5  8086   2021   0    ioatdma -      -
00:04:00.273 I/OAT  0000:00:04.6  8086   2021   0    ioatdma -      -
00:04:00.273 I/OAT  0000:00:04.7  8086   2021   0    ioatdma -      -
00:04:00.273 I/OAT  0000:80:04.0  8086   2021   1    ioatdma -      -
00:04:00.273 I/OAT  0000:80:04.1  8086   2021   1    ioatdma -      -
00:04:00.273 I/OAT  0000:80:04.2  8086   2021   1    ioatdma -      -
00:04:00.273 I/OAT  0000:80:04.3  8086   2021   1    ioatdma -      -
00:04:00.273 I/OAT  0000:80:04.4  8086   2021   1    ioatdma -      -
00:04:00.273 I/OAT  0000:80:04.5  8086   2021   1    ioatdma -      -
00:04:00.273 I/OAT  0000:80:04.6  8086   2021   1    ioatdma -      -
00:04:00.273 I/OAT  0000:80:04.7  8086   2021   1    ioatdma -      -
00:04:00.533 NVMe   0000:d8:00.0  8086   0a54   1    nvme    nvme0  nvme0n1
00:04:00.533 21:19:23 -- spdk/autotest.sh@130 -- # uname -s
00:04:00.533 21:19:23 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:04:00.533 21:19:23 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:04:00.533 21:19:23 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:03.829 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:03.829 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:03.829 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:03.829 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:03.829 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:03.829 0000:00:04.2 (8086
2021): ioatdma -> vfio-pci 00:04:03.829 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:03.829 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:03.829 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:03.829 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:03.829 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:03.829 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:03.829 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:03.829 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:03.829 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:03.829 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:05.211 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:05.211 21:19:27 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:06.151 21:19:28 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:06.151 21:19:28 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:06.151 21:19:28 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:06.151 21:19:28 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:06.151 21:19:28 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:06.151 21:19:28 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:06.151 21:19:28 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.151 21:19:28 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:06.151 21:19:28 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:06.411 21:19:29 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:06.411 21:19:29 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:d8:00.0 00:04:06.411 21:19:29 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.715 Waiting for block devices as requested 00:04:09.715 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:09.715 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:09.715 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:09.715 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:09.715 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:09.715 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:09.715 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:09.978 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:09.978 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:09.978 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:10.241 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:10.241 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:10.241 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:10.241 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:10.510 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:10.510 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:10.510 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:10.770 21:19:33 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:10.770 21:19:33 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:10.770 21:19:33 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:04:10.770 21:19:33 -- common/autotest_common.sh@1488 -- # grep 0000:d8:00.0/nvme/nvme 00:04:10.770 21:19:33 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:10.770 21:19:33 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:10.770 21:19:33 -- 
common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:10.770 21:19:33 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:10.770 21:19:33 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:10.770 21:19:33 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:10.770 21:19:33 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:10.770 21:19:33 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:10.770 21:19:33 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:10.770 21:19:33 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:10.770 21:19:33 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:10.770 21:19:33 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:10.770 21:19:33 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:10.770 21:19:33 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:10.771 21:19:33 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:10.771 21:19:33 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:10.771 21:19:33 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:10.771 21:19:33 -- common/autotest_common.sh@1543 -- # continue 00:04:10.771 21:19:33 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:10.771 21:19:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:10.771 21:19:33 -- common/autotest_common.sh@10 -- # set +x 00:04:10.771 21:19:33 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:10.771 21:19:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:10.771 21:19:33 -- common/autotest_common.sh@10 -- # set +x 00:04:10.771 21:19:33 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:14.063 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:14.063 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:15.972 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:15.972 21:19:38 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:15.972 21:19:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:15.972 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:04:15.972 21:19:38 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:15.972 21:19:38 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:15.972 21:19:38 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:15.972 21:19:38 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:15.972 21:19:38 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:15.972 21:19:38 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:04:15.972 21:19:38 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:15.972 
21:19:38 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:15.972 21:19:38 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:15.972 21:19:38 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:15.972 21:19:38 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:15.972 21:19:38 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:15.972 21:19:38 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:d8:00.0 00:04:15.972 21:19:38 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:15.972 21:19:38 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:15.972 21:19:38 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:15.972 21:19:38 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:15.972 21:19:38 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:15.972 21:19:38 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:d8:00.0 00:04:15.972 21:19:38 -- common/autotest_common.sh@1578 -- # [[ -z 0000:d8:00.0 ]] 00:04:15.972 21:19:38 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=2669010 00:04:15.972 21:19:38 -- common/autotest_common.sh@1584 -- # waitforlisten 2669010 00:04:15.972 21:19:38 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.972 21:19:38 -- common/autotest_common.sh@817 -- # '[' -z 2669010 ']' 00:04:15.972 21:19:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.972 21:19:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:15.972 21:19:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.972 21:19:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:15.972 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:04:15.972 [2024-04-24 21:19:38.690521] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
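The launch sequence traced just above backgrounds spdk_tgt, records its pid, and then waitforlisten polls (with the max_retries=100 budget and /var/tmp/spdk.sock address from the log) until the RPC socket answers. A condensed sketch of that pattern; the readiness probe via rpc_get_methods is an assumption, the real helper's probe differs in detail but has the same effect:

    #!/usr/bin/env bash
    # Sketch of the spdk_tgt launch + waitforlisten pattern from this log.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc_addr=/var/tmp/spdk.sock     # socket path from the log
    max_retries=100                 # retry budget from the log

    "$rootdir/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries-- > 0)); do
        kill -0 "$spdk_tgt_pid" || exit 1   # bail out if the target died
        # assumption: probe readiness with a cheap RPC over the socket
        if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done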
00:04:15.972 [2024-04-24 21:19:38.690585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669010 ]
00:04:15.972 EAL: No free 2048 kB hugepages reported on node 1
00:04:15.972 [2024-04-24 21:19:38.762115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:15.972 [2024-04-24 21:19:38.831446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:16.933 21:19:39 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:16.933 21:19:39 -- common/autotest_common.sh@850 -- # return 0
00:04:16.933 21:19:39 -- common/autotest_common.sh@1586 -- # bdf_id=0
00:04:16.933 21:19:39 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}"
00:04:16.933 21:19:39 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
00:04:20.233 nvme0n1
00:04:20.233 21:19:42 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:04:20.233 [2024-04-24 21:19:42.625653] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:04:20.233 request:
00:04:20.233 {
00:04:20.233   "nvme_ctrlr_name": "nvme0",
00:04:20.233   "password": "test",
00:04:20.233   "method": "bdev_nvme_opal_revert",
00:04:20.233   "req_id": 1
00:04:20.233 }
00:04:20.233 Got JSON-RPC error response
00:04:20.233 response:
00:04:20.233 {
00:04:20.233   "code": -32602,
00:04:20.233   "message": "Invalid parameters"
00:04:20.233 }
00:04:20.233 21:19:42 -- common/autotest_common.sh@1590 -- # true
00:04:20.233 21:19:42 -- common/autotest_common.sh@1591 -- # (( ++bdf_id ))
00:04:20.233 21:19:42 -- common/autotest_common.sh@1594 -- # killprocess 2669010
00:04:20.233 21:19:42 -- common/autotest_common.sh@936 -- # '[' -z 2669010 ']'
00:04:20.233 21:19:42 -- common/autotest_common.sh@940 -- # kill -0 2669010
00:04:20.233 21:19:42 -- common/autotest_common.sh@941 -- # uname
00:04:20.233 21:19:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:20.233 21:19:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2669010
00:04:20.233 21:19:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:20.233 21:19:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:20.233 21:19:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2669010'
00:04:20.233 killing process with pid 2669010
00:04:20.233 21:19:42 -- common/autotest_common.sh@955 -- # kill 2669010
00:04:20.233 21:19:42 -- common/autotest_common.sh@960 -- # wait 2669010
00:04:22.142 21:19:44 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:04:22.142 21:19:44 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:04:22.142 21:19:44 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:22.142 21:19:44 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:22.142 21:19:44 -- spdk/autotest.sh@162 -- # timing_enter lib
00:04:22.142 21:19:44 -- common/autotest_common.sh@710 -- # xtrace_disable
00:04:22.142 21:19:44 -- common/autotest_common.sh@10 -- # set +x
00:04:22.142 21:19:44 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:22.142 21:19:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:22.142 21:19:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
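Before env.sh starts, note that the killprocess teardown traced above is deliberately defensive: it confirms the pid is non-empty and still alive, inspects the process name (reactor_0 here) so a recycled pid is not killed by mistake, and then reaps the process with wait so the exit status is collected. A stripped-down sketch of that pattern (the sudo special case handled by autotest_common.sh is omitted):

    #!/usr/bin/env bash
    # Sketch of the killprocess pattern traced in this log.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                       # still running?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        echo "killing process with pid $pid ($process_name)"
        kill "$pid"
        wait "$pid"   # works here because spdk_tgt is a child of this shell
    }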
00:04:22.142 21:19:44 -- common/autotest_common.sh@10 -- # set +x 00:04:22.142 ************************************ 00:04:22.142 START TEST env 00:04:22.142 ************************************ 00:04:22.142 21:19:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:22.402 * Looking for test storage... 00:04:22.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:22.402 21:19:45 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:22.402 21:19:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.402 21:19:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.402 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:04:22.402 ************************************ 00:04:22.402 START TEST env_memory 00:04:22.402 ************************************ 00:04:22.402 21:19:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:22.402 00:04:22.402 00:04:22.402 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.402 http://cunit.sourceforge.net/ 00:04:22.402 00:04:22.402 00:04:22.402 Suite: memory 00:04:22.662 Test: alloc and free memory map ...[2024-04-24 21:19:45.311812] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:22.662 passed 00:04:22.662 Test: mem map translation ...[2024-04-24 21:19:45.330110] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:22.662 [2024-04-24 21:19:45.330125] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:22.662 [2024-04-24 21:19:45.330162] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:22.662 [2024-04-24 21:19:45.330170] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:22.662 passed 00:04:22.662 Test: mem map registration ...[2024-04-24 21:19:45.365189] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:22.662 [2024-04-24 21:19:45.365204] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:22.662 passed 00:04:22.662 Test: mem map adjacent registrations ...passed 00:04:22.662 00:04:22.662 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.662 suites 1 1 n/a 0 0 00:04:22.662 tests 4 4 4 0 0 00:04:22.662 asserts 152 152 152 0 n/a 00:04:22.662 00:04:22.662 Elapsed time = 0.130 seconds 00:04:22.662 00:04:22.662 real 0m0.144s 00:04:22.662 user 0m0.131s 00:04:22.662 sys 0m0.012s 00:04:22.662 21:19:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.662 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:04:22.662 ************************************ 00:04:22.662 END TEST env_memory 00:04:22.662 ************************************ 
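Every subtest in this run, env_memory included, goes through the same run_test wrapper, which is what prints the START/END banners and the real/user/sys summaries seen here. A simplified sketch of such a wrapper (the real one in autotest_common.sh also manages xtrace state and validates its arguments, as the '[' 2 -le 1 ']' traces above show; that bookkeeping is left out):

    #!/usr/bin/env bash
    # Simplified run_test-style harness: banner, timed run, banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # emits the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    # usage, mirroring the invocation style in this log:
    # run_test env_vtophys "$rootdir/test/env/vtophys/vtophys"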
00:04:22.662 21:19:45 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:22.662 21:19:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.662 21:19:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.662 21:19:45 -- common/autotest_common.sh@10 -- # set +x 00:04:22.923 ************************************ 00:04:22.923 START TEST env_vtophys 00:04:22.923 ************************************ 00:04:22.923 21:19:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:22.923 EAL: lib.eal log level changed from notice to debug 00:04:22.923 EAL: Detected lcore 0 as core 0 on socket 0 00:04:22.923 EAL: Detected lcore 1 as core 1 on socket 0 00:04:22.923 EAL: Detected lcore 2 as core 2 on socket 0 00:04:22.923 EAL: Detected lcore 3 as core 3 on socket 0 00:04:22.923 EAL: Detected lcore 4 as core 4 on socket 0 00:04:22.923 EAL: Detected lcore 5 as core 5 on socket 0 00:04:22.923 EAL: Detected lcore 6 as core 6 on socket 0 00:04:22.923 EAL: Detected lcore 7 as core 8 on socket 0 00:04:22.923 EAL: Detected lcore 8 as core 9 on socket 0 00:04:22.923 EAL: Detected lcore 9 as core 10 on socket 0 00:04:22.923 EAL: Detected lcore 10 as core 11 on socket 0 00:04:22.923 EAL: Detected lcore 11 as core 12 on socket 0 00:04:22.923 EAL: Detected lcore 12 as core 13 on socket 0 00:04:22.923 EAL: Detected lcore 13 as core 14 on socket 0 00:04:22.923 EAL: Detected lcore 14 as core 16 on socket 0 00:04:22.923 EAL: Detected lcore 15 as core 17 on socket 0 00:04:22.923 EAL: Detected lcore 16 as core 18 on socket 0 00:04:22.923 EAL: Detected lcore 17 as core 19 on socket 0 00:04:22.923 EAL: Detected lcore 18 as core 20 on socket 0 00:04:22.923 EAL: Detected lcore 19 as core 21 on socket 0 00:04:22.923 EAL: Detected lcore 20 as core 22 on socket 0 00:04:22.923 EAL: Detected lcore 21 as core 24 on socket 0 00:04:22.923 EAL: Detected lcore 22 as core 25 on socket 0 00:04:22.923 EAL: Detected lcore 23 as core 26 on socket 0 00:04:22.923 EAL: Detected lcore 24 as core 27 on socket 0 00:04:22.923 EAL: Detected lcore 25 as core 28 on socket 0 00:04:22.923 EAL: Detected lcore 26 as core 29 on socket 0 00:04:22.923 EAL: Detected lcore 27 as core 30 on socket 0 00:04:22.923 EAL: Detected lcore 28 as core 0 on socket 1 00:04:22.923 EAL: Detected lcore 29 as core 1 on socket 1 00:04:22.923 EAL: Detected lcore 30 as core 2 on socket 1 00:04:22.923 EAL: Detected lcore 31 as core 3 on socket 1 00:04:22.923 EAL: Detected lcore 32 as core 4 on socket 1 00:04:22.923 EAL: Detected lcore 33 as core 5 on socket 1 00:04:22.923 EAL: Detected lcore 34 as core 6 on socket 1 00:04:22.923 EAL: Detected lcore 35 as core 8 on socket 1 00:04:22.923 EAL: Detected lcore 36 as core 9 on socket 1 00:04:22.923 EAL: Detected lcore 37 as core 10 on socket 1 00:04:22.923 EAL: Detected lcore 38 as core 11 on socket 1 00:04:22.923 EAL: Detected lcore 39 as core 12 on socket 1 00:04:22.923 EAL: Detected lcore 40 as core 13 on socket 1 00:04:22.923 EAL: Detected lcore 41 as core 14 on socket 1 00:04:22.923 EAL: Detected lcore 42 as core 16 on socket 1 00:04:22.923 EAL: Detected lcore 43 as core 17 on socket 1 00:04:22.923 EAL: Detected lcore 44 as core 18 on socket 1 00:04:22.923 EAL: Detected lcore 45 as core 19 on socket 1 00:04:22.923 EAL: Detected lcore 46 as core 20 on socket 1 00:04:22.923 EAL: Detected lcore 47 as core 21 on socket 1 00:04:22.923 EAL: Detected lcore 48 as core 22 on 
socket 1 00:04:22.923 EAL: Detected lcore 49 as core 24 on socket 1 00:04:22.923 EAL: Detected lcore 50 as core 25 on socket 1 00:04:22.923 EAL: Detected lcore 51 as core 26 on socket 1 00:04:22.923 EAL: Detected lcore 52 as core 27 on socket 1 00:04:22.923 EAL: Detected lcore 53 as core 28 on socket 1 00:04:22.923 EAL: Detected lcore 54 as core 29 on socket 1 00:04:22.923 EAL: Detected lcore 55 as core 30 on socket 1 00:04:22.923 EAL: Detected lcore 56 as core 0 on socket 0 00:04:22.923 EAL: Detected lcore 57 as core 1 on socket 0 00:04:22.923 EAL: Detected lcore 58 as core 2 on socket 0 00:04:22.923 EAL: Detected lcore 59 as core 3 on socket 0 00:04:22.923 EAL: Detected lcore 60 as core 4 on socket 0 00:04:22.923 EAL: Detected lcore 61 as core 5 on socket 0 00:04:22.923 EAL: Detected lcore 62 as core 6 on socket 0 00:04:22.923 EAL: Detected lcore 63 as core 8 on socket 0 00:04:22.923 EAL: Detected lcore 64 as core 9 on socket 0 00:04:22.923 EAL: Detected lcore 65 as core 10 on socket 0 00:04:22.923 EAL: Detected lcore 66 as core 11 on socket 0 00:04:22.923 EAL: Detected lcore 67 as core 12 on socket 0 00:04:22.923 EAL: Detected lcore 68 as core 13 on socket 0 00:04:22.923 EAL: Detected lcore 69 as core 14 on socket 0 00:04:22.923 EAL: Detected lcore 70 as core 16 on socket 0 00:04:22.923 EAL: Detected lcore 71 as core 17 on socket 0 00:04:22.923 EAL: Detected lcore 72 as core 18 on socket 0 00:04:22.923 EAL: Detected lcore 73 as core 19 on socket 0 00:04:22.923 EAL: Detected lcore 74 as core 20 on socket 0 00:04:22.923 EAL: Detected lcore 75 as core 21 on socket 0 00:04:22.923 EAL: Detected lcore 76 as core 22 on socket 0 00:04:22.923 EAL: Detected lcore 77 as core 24 on socket 0 00:04:22.923 EAL: Detected lcore 78 as core 25 on socket 0 00:04:22.923 EAL: Detected lcore 79 as core 26 on socket 0 00:04:22.923 EAL: Detected lcore 80 as core 27 on socket 0 00:04:22.923 EAL: Detected lcore 81 as core 28 on socket 0 00:04:22.923 EAL: Detected lcore 82 as core 29 on socket 0 00:04:22.923 EAL: Detected lcore 83 as core 30 on socket 0 00:04:22.923 EAL: Detected lcore 84 as core 0 on socket 1 00:04:22.923 EAL: Detected lcore 85 as core 1 on socket 1 00:04:22.923 EAL: Detected lcore 86 as core 2 on socket 1 00:04:22.923 EAL: Detected lcore 87 as core 3 on socket 1 00:04:22.923 EAL: Detected lcore 88 as core 4 on socket 1 00:04:22.923 EAL: Detected lcore 89 as core 5 on socket 1 00:04:22.923 EAL: Detected lcore 90 as core 6 on socket 1 00:04:22.923 EAL: Detected lcore 91 as core 8 on socket 1 00:04:22.923 EAL: Detected lcore 92 as core 9 on socket 1 00:04:22.923 EAL: Detected lcore 93 as core 10 on socket 1 00:04:22.923 EAL: Detected lcore 94 as core 11 on socket 1 00:04:22.923 EAL: Detected lcore 95 as core 12 on socket 1 00:04:22.923 EAL: Detected lcore 96 as core 13 on socket 1 00:04:22.923 EAL: Detected lcore 97 as core 14 on socket 1 00:04:22.923 EAL: Detected lcore 98 as core 16 on socket 1 00:04:22.923 EAL: Detected lcore 99 as core 17 on socket 1 00:04:22.923 EAL: Detected lcore 100 as core 18 on socket 1 00:04:22.923 EAL: Detected lcore 101 as core 19 on socket 1 00:04:22.923 EAL: Detected lcore 102 as core 20 on socket 1 00:04:22.923 EAL: Detected lcore 103 as core 21 on socket 1 00:04:22.923 EAL: Detected lcore 104 as core 22 on socket 1 00:04:22.923 EAL: Detected lcore 105 as core 24 on socket 1 00:04:22.923 EAL: Detected lcore 106 as core 25 on socket 1 00:04:22.923 EAL: Detected lcore 107 as core 26 on socket 1 00:04:22.923 EAL: Detected lcore 108 as core 27 on socket 1 00:04:22.923 
EAL: Detected lcore 109 as core 28 on socket 1 00:04:22.923 EAL: Detected lcore 110 as core 29 on socket 1 00:04:22.923 EAL: Detected lcore 111 as core 30 on socket 1 00:04:22.923 EAL: Maximum logical cores by configuration: 128 00:04:22.923 EAL: Detected CPU lcores: 112 00:04:22.923 EAL: Detected NUMA nodes: 2 00:04:22.923 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:22.923 EAL: Detected shared linkage of DPDK 00:04:22.923 EAL: No shared files mode enabled, IPC will be disabled 00:04:22.923 EAL: Bus pci wants IOVA as 'DC' 00:04:22.923 EAL: Buses did not request a specific IOVA mode. 00:04:22.923 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:22.923 EAL: Selected IOVA mode 'VA' 00:04:22.924 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.924 EAL: Probing VFIO support... 00:04:22.924 EAL: IOMMU type 1 (Type 1) is supported 00:04:22.924 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:22.924 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:22.924 EAL: VFIO support initialized 00:04:22.924 EAL: Ask a virtual area of 0x2e000 bytes 00:04:22.924 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:22.924 EAL: Setting up physically contiguous memory... 00:04:22.924 EAL: Setting maximum number of open files to 524288 00:04:22.924 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:22.924 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:22.924 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:22.924 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.924 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:22.924 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.924 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.924 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:22.924 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:22.924 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.924 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:22.924 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.924 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.924 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:22.924 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:22.924 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.924 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:22.924 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.924 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.924 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:22.924 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:22.924 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.924 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:22.924 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:22.924 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.924 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:22.924 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:22.924 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:22.924 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.924 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:22.924 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:22.924 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.924 EAL: Virtual area found at 0x201000a00000 
(size = 0x400000000) 00:04:22.924 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:22.924 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.924 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:22.924 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:22.924 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.924 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:22.924 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:22.924 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.924 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:22.924 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:22.924 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.924 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:22.924 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:22.924 EAL: Ask a virtual area of 0x61000 bytes 00:04:22.924 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:22.924 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:22.924 EAL: Ask a virtual area of 0x400000000 bytes 00:04:22.924 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:22.924 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:22.924 EAL: Hugepages will be freed exactly as allocated. 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: TSC frequency is ~2500000 KHz 00:04:22.924 EAL: Main lcore 0 is ready (tid=7f1e174daa00;cpuset=[0]) 00:04:22.924 EAL: Trying to obtain current memory policy. 00:04:22.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.924 EAL: Restoring previous memory policy: 0 00:04:22.924 EAL: request: mp_malloc_sync 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: Heap on socket 0 was expanded by 2MB 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:22.924 EAL: Mem event callback 'spdk:(nil)' registered 00:04:22.924 00:04:22.924 00:04:22.924 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.924 http://cunit.sourceforge.net/ 00:04:22.924 00:04:22.924 00:04:22.924 Suite: components_suite 00:04:22.924 Test: vtophys_malloc_test ...passed 00:04:22.924 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:22.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.924 EAL: Restoring previous memory policy: 4 00:04:22.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.924 EAL: request: mp_malloc_sync 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: Heap on socket 0 was expanded by 4MB 00:04:22.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.924 EAL: request: mp_malloc_sync 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: Heap on socket 0 was shrunk by 4MB 00:04:22.924 EAL: Trying to obtain current memory policy. 
00:04:22.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.924 EAL: Restoring previous memory policy: 4 00:04:22.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.924 EAL: request: mp_malloc_sync 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: Heap on socket 0 was expanded by 6MB 00:04:22.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.924 EAL: request: mp_malloc_sync 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: Heap on socket 0 was shrunk by 6MB 00:04:22.924 EAL: Trying to obtain current memory policy. 00:04:22.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.924 EAL: Restoring previous memory policy: 4 00:04:22.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.924 EAL: request: mp_malloc_sync 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: Heap on socket 0 was expanded by 10MB 00:04:22.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.924 EAL: request: mp_malloc_sync 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: Heap on socket 0 was shrunk by 10MB 00:04:22.924 EAL: Trying to obtain current memory policy. 00:04:22.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.924 EAL: Restoring previous memory policy: 4 00:04:22.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.924 EAL: request: mp_malloc_sync 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: Heap on socket 0 was expanded by 18MB 00:04:22.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.924 EAL: request: mp_malloc_sync 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: Heap on socket 0 was shrunk by 18MB 00:04:22.924 EAL: Trying to obtain current memory policy. 00:04:22.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.924 EAL: Restoring previous memory policy: 4 00:04:22.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.924 EAL: request: mp_malloc_sync 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: Heap on socket 0 was expanded by 34MB 00:04:22.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.924 EAL: request: mp_malloc_sync 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: Heap on socket 0 was shrunk by 34MB 00:04:22.924 EAL: Trying to obtain current memory policy. 00:04:22.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.924 EAL: Restoring previous memory policy: 4 00:04:22.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.924 EAL: request: mp_malloc_sync 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: Heap on socket 0 was expanded by 66MB 00:04:22.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.924 EAL: request: mp_malloc_sync 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: Heap on socket 0 was shrunk by 66MB 00:04:22.924 EAL: Trying to obtain current memory policy. 
00:04:22.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.924 EAL: Restoring previous memory policy: 4 00:04:22.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.924 EAL: request: mp_malloc_sync 00:04:22.924 EAL: No shared files mode enabled, IPC is disabled 00:04:22.924 EAL: Heap on socket 0 was expanded by 130MB 00:04:22.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.184 EAL: request: mp_malloc_sync 00:04:23.184 EAL: No shared files mode enabled, IPC is disabled 00:04:23.184 EAL: Heap on socket 0 was shrunk by 130MB 00:04:23.184 EAL: Trying to obtain current memory policy. 00:04:23.184 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.184 EAL: Restoring previous memory policy: 4 00:04:23.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.184 EAL: request: mp_malloc_sync 00:04:23.184 EAL: No shared files mode enabled, IPC is disabled 00:04:23.184 EAL: Heap on socket 0 was expanded by 258MB 00:04:23.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.184 EAL: request: mp_malloc_sync 00:04:23.184 EAL: No shared files mode enabled, IPC is disabled 00:04:23.184 EAL: Heap on socket 0 was shrunk by 258MB 00:04:23.184 EAL: Trying to obtain current memory policy. 00:04:23.184 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.184 EAL: Restoring previous memory policy: 4 00:04:23.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.184 EAL: request: mp_malloc_sync 00:04:23.184 EAL: No shared files mode enabled, IPC is disabled 00:04:23.184 EAL: Heap on socket 0 was expanded by 514MB 00:04:23.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.447 EAL: request: mp_malloc_sync 00:04:23.447 EAL: No shared files mode enabled, IPC is disabled 00:04:23.447 EAL: Heap on socket 0 was shrunk by 514MB 00:04:23.447 EAL: Trying to obtain current memory policy. 
00:04:23.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.740 EAL: Restoring previous memory policy: 4 00:04:23.740 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.740 EAL: request: mp_malloc_sync 00:04:23.740 EAL: No shared files mode enabled, IPC is disabled 00:04:23.740 EAL: Heap on socket 0 was expanded by 1026MB 00:04:23.740 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.999 EAL: request: mp_malloc_sync 00:04:23.999 EAL: No shared files mode enabled, IPC is disabled 00:04:23.999 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:23.999 passed 00:04:23.999 00:04:23.999 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.999 suites 1 1 n/a 0 0 00:04:23.999 tests 2 2 2 0 0 00:04:23.999 asserts 497 497 497 0 n/a 00:04:23.999 00:04:23.999 Elapsed time = 0.967 seconds 00:04:23.999 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.999 EAL: request: mp_malloc_sync 00:04:23.999 EAL: No shared files mode enabled, IPC is disabled 00:04:23.999 EAL: Heap on socket 0 was shrunk by 2MB 00:04:23.999 EAL: No shared files mode enabled, IPC is disabled 00:04:23.999 EAL: No shared files mode enabled, IPC is disabled 00:04:24.000 EAL: No shared files mode enabled, IPC is disabled 00:04:24.000 00:04:24.000 real 0m1.109s 00:04:24.000 user 0m0.637s 00:04:24.000 sys 0m0.432s 00:04:24.000 21:19:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.000 21:19:46 -- common/autotest_common.sh@10 -- # set +x 00:04:24.000 ************************************ 00:04:24.000 END TEST env_vtophys 00:04:24.000 ************************************ 00:04:24.000 21:19:46 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:24.000 21:19:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.000 21:19:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.000 21:19:46 -- common/autotest_common.sh@10 -- # set +x 00:04:24.258 ************************************ 00:04:24.258 START TEST env_pci 00:04:24.258 ************************************ 00:04:24.258 21:19:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:24.258 00:04:24.258 00:04:24.258 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.258 http://cunit.sourceforge.net/ 00:04:24.258 00:04:24.258 00:04:24.258 Suite: pci 00:04:24.258 Test: pci_hook ...[2024-04-24 21:19:46.948489] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2670580 has claimed it 00:04:24.258 EAL: Cannot find device (10000:00:01.0) 00:04:24.258 EAL: Failed to attach device on primary process 00:04:24.258 passed 00:04:24.258 00:04:24.258 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.258 suites 1 1 n/a 0 0 00:04:24.258 tests 1 1 1 0 0 00:04:24.258 asserts 25 25 25 0 n/a 00:04:24.258 00:04:24.258 Elapsed time = 0.034 seconds 00:04:24.258 00:04:24.258 real 0m0.057s 00:04:24.258 user 0m0.008s 00:04:24.258 sys 0m0.048s 00:04:24.258 21:19:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.258 21:19:46 -- common/autotest_common.sh@10 -- # set +x 00:04:24.258 ************************************ 00:04:24.258 END TEST env_pci 00:04:24.258 ************************************ 00:04:24.258 21:19:47 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:24.258 21:19:47 -- env/env.sh@15 -- # uname 00:04:24.258 21:19:47 -- env/env.sh@15 -- # '[' Linux = 
Linux ']' 00:04:24.258 21:19:47 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:24.258 21:19:47 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.258 21:19:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:24.258 21:19:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.258 21:19:47 -- common/autotest_common.sh@10 -- # set +x 00:04:24.517 ************************************ 00:04:24.517 START TEST env_dpdk_post_init 00:04:24.517 ************************************ 00:04:24.517 21:19:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.517 EAL: Detected CPU lcores: 112 00:04:24.517 EAL: Detected NUMA nodes: 2 00:04:24.517 EAL: Detected shared linkage of DPDK 00:04:24.517 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.517 EAL: Selected IOVA mode 'VA' 00:04:24.517 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.517 EAL: VFIO support initialized 00:04:24.517 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.517 EAL: Using IOMMU type 1 (Type 1) 00:04:24.517 EAL: Ignore mapping IO port bar(1) 00:04:24.517 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:24.517 EAL: Ignore mapping IO port bar(1) 00:04:24.517 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:24.517 EAL: Ignore mapping IO port bar(1) 00:04:24.517 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:24.517 EAL: Ignore mapping IO port bar(1) 00:04:24.517 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:24.517 EAL: Ignore mapping IO port bar(1) 00:04:24.517 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:24.517 EAL: Ignore mapping IO port bar(1) 00:04:24.517 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:24.517 EAL: Ignore mapping IO port bar(1) 00:04:24.517 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:24.517 EAL: Ignore mapping IO port bar(1) 00:04:24.517 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:24.777 EAL: Ignore mapping IO port bar(1) 00:04:24.777 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:24.777 EAL: Ignore mapping IO port bar(1) 00:04:24.777 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:24.777 EAL: Ignore mapping IO port bar(1) 00:04:24.777 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:24.777 EAL: Ignore mapping IO port bar(1) 00:04:24.777 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:24.777 EAL: Ignore mapping IO port bar(1) 00:04:24.777 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:24.777 EAL: Ignore mapping IO port bar(1) 00:04:24.777 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:24.777 EAL: Ignore mapping IO port bar(1) 00:04:24.777 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:24.777 EAL: Ignore mapping IO port bar(1) 00:04:24.777 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:25.714 EAL: Probe 
PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:28.999 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:28.999 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:29.258 Starting DPDK initialization... 00:04:29.258 Starting SPDK post initialization... 00:04:29.258 SPDK NVMe probe 00:04:29.258 Attaching to 0000:d8:00.0 00:04:29.258 Attached to 0000:d8:00.0 00:04:29.258 Cleaning up... 00:04:29.258 00:04:29.258 real 0m4.965s 00:04:29.258 user 0m3.639s 00:04:29.258 sys 0m0.382s 00:04:29.258 21:19:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:29.258 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:04:29.258 ************************************ 00:04:29.258 END TEST env_dpdk_post_init 00:04:29.258 ************************************ 00:04:29.517 21:19:52 -- env/env.sh@26 -- # uname 00:04:29.517 21:19:52 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:29.517 21:19:52 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:29.517 21:19:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.517 21:19:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.518 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:04:29.518 ************************************ 00:04:29.518 START TEST env_mem_callbacks 00:04:29.518 ************************************ 00:04:29.518 21:19:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:29.518 EAL: Detected CPU lcores: 112 00:04:29.518 EAL: Detected NUMA nodes: 2 00:04:29.518 EAL: Detected shared linkage of DPDK 00:04:29.518 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.518 EAL: Selected IOVA mode 'VA' 00:04:29.518 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.518 EAL: VFIO support initialized 00:04:29.518 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.518 00:04:29.518 00:04:29.518 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.518 http://cunit.sourceforge.net/ 00:04:29.518 00:04:29.518 00:04:29.518 Suite: memory 00:04:29.518 Test: test ... 
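In the register/unregister lines that follow, each registered length is the malloc size plus a little allocator metadata, rounded up to whole 2 MiB hugepages: 3 MB fits in 4 MB, while the 4 MB and 8 MB mallocs spill over into 6 MB and 10 MB. A sketch of that rounding (the 64-byte metadata size is an assumption; any small overhead tips the same boundaries):

    for sz in 3145728 4194304 8388608; do
        # assumed ~64 B of allocator metadata, rounded up to 2 MiB pages
        echo "$sz -> $(( ((sz + 64 + 2097151) / 2097152) * 2097152 ))"
    done
    # 3145728 -> 4194304, 4194304 -> 6291456, 8388608 -> 10485760

The 64-byte malloc ('buf 0x2000004fff40 len 64') produces no register line because it lands inside memory that is already registered.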
00:04:29.518 register 0x200000200000 2097152 00:04:29.518 malloc 3145728 00:04:29.518 register 0x200000400000 4194304 00:04:29.518 buf 0x200000500000 len 3145728 PASSED 00:04:29.518 malloc 64 00:04:29.518 buf 0x2000004fff40 len 64 PASSED 00:04:29.518 malloc 4194304 00:04:29.518 register 0x200000800000 6291456 00:04:29.518 buf 0x200000a00000 len 4194304 PASSED 00:04:29.518 free 0x200000500000 3145728 00:04:29.518 free 0x2000004fff40 64 00:04:29.518 unregister 0x200000400000 4194304 PASSED 00:04:29.518 free 0x200000a00000 4194304 00:04:29.518 unregister 0x200000800000 6291456 PASSED 00:04:29.518 malloc 8388608 00:04:29.518 register 0x200000400000 10485760 00:04:29.518 buf 0x200000600000 len 8388608 PASSED 00:04:29.518 free 0x200000600000 8388608 00:04:29.518 unregister 0x200000400000 10485760 PASSED 00:04:29.518 passed 00:04:29.518 00:04:29.518 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.518 suites 1 1 n/a 0 0 00:04:29.518 tests 1 1 1 0 0 00:04:29.518 asserts 15 15 15 0 n/a 00:04:29.518 00:04:29.518 Elapsed time = 0.005 seconds 00:04:29.518 00:04:29.518 real 0m0.050s 00:04:29.518 user 0m0.013s 00:04:29.518 sys 0m0.037s 00:04:29.518 21:19:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:29.518 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:04:29.518 ************************************ 00:04:29.518 END TEST env_mem_callbacks 00:04:29.518 ************************************ 00:04:29.777 00:04:29.777 real 0m7.412s 00:04:29.777 user 0m4.788s 00:04:29.777 sys 0m1.550s 00:04:29.777 21:19:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:29.777 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:04:29.777 ************************************ 00:04:29.777 END TEST env 00:04:29.777 ************************************ 00:04:29.777 21:19:52 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:29.777 21:19:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.777 21:19:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.777 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:04:29.777 ************************************ 00:04:29.777 START TEST rpc 00:04:29.777 ************************************ 00:04:29.777 21:19:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:30.037 * Looking for test storage... 00:04:30.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:30.037 21:19:52 -- rpc/rpc.sh@65 -- # spdk_pid=2671763 00:04:30.037 21:19:52 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.037 21:19:52 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:30.037 21:19:52 -- rpc/rpc.sh@67 -- # waitforlisten 2671763 00:04:30.037 21:19:52 -- common/autotest_common.sh@817 -- # '[' -z 2671763 ']' 00:04:30.037 21:19:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.037 21:19:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:30.037 21:19:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
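The rpc suite that starts here drives the freshly launched spdk_tgt over the UNIX socket named above; rpc_cmd is, in effect, a thin wrapper around scripts/rpc.py. The rpc_integrity sequence below can be reproduced by hand against the default /var/tmp/spdk.sock (paths relative to the SPDK checkout):

    ./scripts/rpc.py bdev_malloc_create 8 512       # 8 MB of 512 B blocks -> prints "Malloc0"
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length     # 2 while both bdevs exist
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length     # back to 0

The 8 MB / 512 B arguments are what produce the 'num_blocks: 16384, block_size: 512' fields in the bdev dumps below.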
00:04:30.037 21:19:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:30.037 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:04:30.037 [2024-04-24 21:19:52.779550] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:04:30.037 [2024-04-24 21:19:52.779597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671763 ] 00:04:30.037 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.037 [2024-04-24 21:19:52.849358] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.037 [2024-04-24 21:19:52.922301] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:30.037 [2024-04-24 21:19:52.922338] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2671763' to capture a snapshot of events at runtime. 00:04:30.037 [2024-04-24 21:19:52.922347] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:30.037 [2024-04-24 21:19:52.922356] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:30.037 [2024-04-24 21:19:52.922363] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2671763 for offline analysis/debug. 00:04:30.037 [2024-04-24 21:19:52.922387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.974 21:19:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:30.974 21:19:53 -- common/autotest_common.sh@850 -- # return 0 00:04:30.974 21:19:53 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:30.974 21:19:53 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:30.974 21:19:53 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:30.974 21:19:53 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:30.974 21:19:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.974 21:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.974 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:30.974 ************************************ 00:04:30.974 START TEST rpc_integrity 00:04:30.974 ************************************ 00:04:30.974 21:19:53 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:30.974 21:19:53 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:30.974 21:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.974 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:30.974 21:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.974 21:19:53 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:30.974 21:19:53 -- rpc/rpc.sh@13 -- # jq length 00:04:30.974 21:19:53 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:30.974 21:19:53 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:30.974 21:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:04:30.974 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:30.974 21:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.974 21:19:53 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:30.974 21:19:53 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:30.974 21:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.974 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:30.974 21:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:30.974 21:19:53 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:30.974 { 00:04:30.974 "name": "Malloc0", 00:04:30.974 "aliases": [ 00:04:30.974 "f557791a-ff65-41d3-9bd6-502d1b7da8ad" 00:04:30.974 ], 00:04:30.974 "product_name": "Malloc disk", 00:04:30.974 "block_size": 512, 00:04:30.974 "num_blocks": 16384, 00:04:30.974 "uuid": "f557791a-ff65-41d3-9bd6-502d1b7da8ad", 00:04:30.974 "assigned_rate_limits": { 00:04:30.974 "rw_ios_per_sec": 0, 00:04:30.974 "rw_mbytes_per_sec": 0, 00:04:30.974 "r_mbytes_per_sec": 0, 00:04:30.974 "w_mbytes_per_sec": 0 00:04:30.974 }, 00:04:30.974 "claimed": false, 00:04:30.974 "zoned": false, 00:04:30.974 "supported_io_types": { 00:04:30.974 "read": true, 00:04:30.974 "write": true, 00:04:30.974 "unmap": true, 00:04:30.974 "write_zeroes": true, 00:04:30.974 "flush": true, 00:04:30.974 "reset": true, 00:04:30.974 "compare": false, 00:04:30.974 "compare_and_write": false, 00:04:30.974 "abort": true, 00:04:30.974 "nvme_admin": false, 00:04:30.974 "nvme_io": false 00:04:30.974 }, 00:04:30.974 "memory_domains": [ 00:04:30.974 { 00:04:30.974 "dma_device_id": "system", 00:04:30.974 "dma_device_type": 1 00:04:30.974 }, 00:04:30.974 { 00:04:30.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.974 "dma_device_type": 2 00:04:30.974 } 00:04:30.974 ], 00:04:30.974 "driver_specific": {} 00:04:30.974 } 00:04:30.974 ]' 00:04:30.974 21:19:53 -- rpc/rpc.sh@17 -- # jq length 00:04:30.974 21:19:53 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:30.974 21:19:53 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:30.974 21:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:30.974 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:31.232 [2024-04-24 21:19:53.864886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:31.232 [2024-04-24 21:19:53.864916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:31.232 [2024-04-24 21:19:53.864929] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe0e410 00:04:31.232 [2024-04-24 21:19:53.864938] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:31.232 [2024-04-24 21:19:53.866004] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:31.232 [2024-04-24 21:19:53.866026] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:31.232 Passthru0 00:04:31.232 21:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:31.232 21:19:53 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:31.232 21:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:31.232 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:31.232 21:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:31.232 21:19:53 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:31.232 { 00:04:31.232 "name": "Malloc0", 00:04:31.232 "aliases": [ 00:04:31.232 "f557791a-ff65-41d3-9bd6-502d1b7da8ad" 00:04:31.232 ], 00:04:31.232 "product_name": "Malloc disk", 00:04:31.232 "block_size": 512, 
00:04:31.232 "num_blocks": 16384, 00:04:31.232 "uuid": "f557791a-ff65-41d3-9bd6-502d1b7da8ad", 00:04:31.232 "assigned_rate_limits": { 00:04:31.232 "rw_ios_per_sec": 0, 00:04:31.232 "rw_mbytes_per_sec": 0, 00:04:31.232 "r_mbytes_per_sec": 0, 00:04:31.232 "w_mbytes_per_sec": 0 00:04:31.232 }, 00:04:31.232 "claimed": true, 00:04:31.232 "claim_type": "exclusive_write", 00:04:31.232 "zoned": false, 00:04:31.232 "supported_io_types": { 00:04:31.232 "read": true, 00:04:31.232 "write": true, 00:04:31.232 "unmap": true, 00:04:31.232 "write_zeroes": true, 00:04:31.232 "flush": true, 00:04:31.232 "reset": true, 00:04:31.232 "compare": false, 00:04:31.232 "compare_and_write": false, 00:04:31.232 "abort": true, 00:04:31.232 "nvme_admin": false, 00:04:31.232 "nvme_io": false 00:04:31.232 }, 00:04:31.232 "memory_domains": [ 00:04:31.232 { 00:04:31.232 "dma_device_id": "system", 00:04:31.232 "dma_device_type": 1 00:04:31.232 }, 00:04:31.232 { 00:04:31.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.232 "dma_device_type": 2 00:04:31.232 } 00:04:31.232 ], 00:04:31.232 "driver_specific": {} 00:04:31.232 }, 00:04:31.232 { 00:04:31.232 "name": "Passthru0", 00:04:31.232 "aliases": [ 00:04:31.232 "fd51b77a-cd24-5b0d-8579-c34054c00450" 00:04:31.232 ], 00:04:31.232 "product_name": "passthru", 00:04:31.232 "block_size": 512, 00:04:31.232 "num_blocks": 16384, 00:04:31.232 "uuid": "fd51b77a-cd24-5b0d-8579-c34054c00450", 00:04:31.233 "assigned_rate_limits": { 00:04:31.233 "rw_ios_per_sec": 0, 00:04:31.233 "rw_mbytes_per_sec": 0, 00:04:31.233 "r_mbytes_per_sec": 0, 00:04:31.233 "w_mbytes_per_sec": 0 00:04:31.233 }, 00:04:31.233 "claimed": false, 00:04:31.233 "zoned": false, 00:04:31.233 "supported_io_types": { 00:04:31.233 "read": true, 00:04:31.233 "write": true, 00:04:31.233 "unmap": true, 00:04:31.233 "write_zeroes": true, 00:04:31.233 "flush": true, 00:04:31.233 "reset": true, 00:04:31.233 "compare": false, 00:04:31.233 "compare_and_write": false, 00:04:31.233 "abort": true, 00:04:31.233 "nvme_admin": false, 00:04:31.233 "nvme_io": false 00:04:31.233 }, 00:04:31.233 "memory_domains": [ 00:04:31.233 { 00:04:31.233 "dma_device_id": "system", 00:04:31.233 "dma_device_type": 1 00:04:31.233 }, 00:04:31.233 { 00:04:31.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.233 "dma_device_type": 2 00:04:31.233 } 00:04:31.233 ], 00:04:31.233 "driver_specific": { 00:04:31.233 "passthru": { 00:04:31.233 "name": "Passthru0", 00:04:31.233 "base_bdev_name": "Malloc0" 00:04:31.233 } 00:04:31.233 } 00:04:31.233 } 00:04:31.233 ]' 00:04:31.233 21:19:53 -- rpc/rpc.sh@21 -- # jq length 00:04:31.233 21:19:53 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:31.233 21:19:53 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:31.233 21:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:31.233 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:31.233 21:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:31.233 21:19:53 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:31.233 21:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:31.233 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:31.233 21:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:31.233 21:19:53 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:31.233 21:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:31.233 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:04:31.233 21:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:31.233 21:19:53 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:31.233 21:19:53 -- rpc/rpc.sh@26 -- # jq length 00:04:31.233 21:19:54 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:31.233 00:04:31.233 real 0m0.290s 00:04:31.233 user 0m0.177s 00:04:31.233 sys 0m0.051s 00:04:31.233 21:19:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:31.233 21:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:31.233 ************************************ 00:04:31.233 END TEST rpc_integrity 00:04:31.233 ************************************ 00:04:31.233 21:19:54 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:31.233 21:19:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:31.233 21:19:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:31.233 21:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:31.491 ************************************ 00:04:31.491 START TEST rpc_plugins 00:04:31.491 ************************************ 00:04:31.491 21:19:54 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:31.491 21:19:54 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:31.491 21:19:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:31.491 21:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:31.491 21:19:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:31.491 21:19:54 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:31.491 21:19:54 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:31.491 21:19:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:31.491 21:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:31.491 21:19:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:31.491 21:19:54 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:31.491 { 00:04:31.491 "name": "Malloc1", 00:04:31.491 "aliases": [ 00:04:31.491 "bc91b390-4f57-413c-9fcd-df453bc2eb77" 00:04:31.491 ], 00:04:31.491 "product_name": "Malloc disk", 00:04:31.491 "block_size": 4096, 00:04:31.491 "num_blocks": 256, 00:04:31.491 "uuid": "bc91b390-4f57-413c-9fcd-df453bc2eb77", 00:04:31.491 "assigned_rate_limits": { 00:04:31.491 "rw_ios_per_sec": 0, 00:04:31.491 "rw_mbytes_per_sec": 0, 00:04:31.491 "r_mbytes_per_sec": 0, 00:04:31.491 "w_mbytes_per_sec": 0 00:04:31.491 }, 00:04:31.491 "claimed": false, 00:04:31.491 "zoned": false, 00:04:31.491 "supported_io_types": { 00:04:31.491 "read": true, 00:04:31.491 "write": true, 00:04:31.491 "unmap": true, 00:04:31.491 "write_zeroes": true, 00:04:31.491 "flush": true, 00:04:31.491 "reset": true, 00:04:31.491 "compare": false, 00:04:31.491 "compare_and_write": false, 00:04:31.491 "abort": true, 00:04:31.491 "nvme_admin": false, 00:04:31.491 "nvme_io": false 00:04:31.491 }, 00:04:31.491 "memory_domains": [ 00:04:31.491 { 00:04:31.491 "dma_device_id": "system", 00:04:31.491 "dma_device_type": 1 00:04:31.491 }, 00:04:31.491 { 00:04:31.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.491 "dma_device_type": 2 00:04:31.491 } 00:04:31.491 ], 00:04:31.491 "driver_specific": {} 00:04:31.491 } 00:04:31.491 ]' 00:04:31.491 21:19:54 -- rpc/rpc.sh@32 -- # jq length 00:04:31.491 21:19:54 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:31.491 21:19:54 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:31.491 21:19:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:31.491 21:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:31.491 21:19:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:31.491 21:19:54 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:31.491 21:19:54 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:04:31.491 21:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:31.491 21:19:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:31.491 21:19:54 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:31.491 21:19:54 -- rpc/rpc.sh@36 -- # jq length 00:04:31.491 21:19:54 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:31.491 00:04:31.491 real 0m0.140s 00:04:31.491 user 0m0.083s 00:04:31.491 sys 0m0.025s 00:04:31.491 21:19:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:31.491 21:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:31.491 ************************************ 00:04:31.491 END TEST rpc_plugins 00:04:31.491 ************************************ 00:04:31.750 21:19:54 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:31.750 21:19:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:31.750 21:19:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:31.750 21:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:31.750 ************************************ 00:04:31.750 START TEST rpc_trace_cmd_test 00:04:31.750 ************************************ 00:04:31.750 21:19:54 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:31.750 21:19:54 -- rpc/rpc.sh@40 -- # local info 00:04:31.750 21:19:54 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:31.750 21:19:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:31.750 21:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:31.750 21:19:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:31.750 21:19:54 -- rpc/rpc.sh@42 -- # info='{ 00:04:31.750 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2671763", 00:04:31.750 "tpoint_group_mask": "0x8", 00:04:31.750 "iscsi_conn": { 00:04:31.750 "mask": "0x2", 00:04:31.750 "tpoint_mask": "0x0" 00:04:31.750 }, 00:04:31.750 "scsi": { 00:04:31.750 "mask": "0x4", 00:04:31.750 "tpoint_mask": "0x0" 00:04:31.750 }, 00:04:31.750 "bdev": { 00:04:31.750 "mask": "0x8", 00:04:31.750 "tpoint_mask": "0xffffffffffffffff" 00:04:31.750 }, 00:04:31.750 "nvmf_rdma": { 00:04:31.750 "mask": "0x10", 00:04:31.750 "tpoint_mask": "0x0" 00:04:31.750 }, 00:04:31.750 "nvmf_tcp": { 00:04:31.750 "mask": "0x20", 00:04:31.751 "tpoint_mask": "0x0" 00:04:31.751 }, 00:04:31.751 "ftl": { 00:04:31.751 "mask": "0x40", 00:04:31.751 "tpoint_mask": "0x0" 00:04:31.751 }, 00:04:31.751 "blobfs": { 00:04:31.751 "mask": "0x80", 00:04:31.751 "tpoint_mask": "0x0" 00:04:31.751 }, 00:04:31.751 "dsa": { 00:04:31.751 "mask": "0x200", 00:04:31.751 "tpoint_mask": "0x0" 00:04:31.751 }, 00:04:31.751 "thread": { 00:04:31.751 "mask": "0x400", 00:04:31.751 "tpoint_mask": "0x0" 00:04:31.751 }, 00:04:31.751 "nvme_pcie": { 00:04:31.751 "mask": "0x800", 00:04:31.751 "tpoint_mask": "0x0" 00:04:31.751 }, 00:04:31.751 "iaa": { 00:04:31.751 "mask": "0x1000", 00:04:31.751 "tpoint_mask": "0x0" 00:04:31.751 }, 00:04:31.751 "nvme_tcp": { 00:04:31.751 "mask": "0x2000", 00:04:31.751 "tpoint_mask": "0x0" 00:04:31.751 }, 00:04:31.751 "bdev_nvme": { 00:04:31.751 "mask": "0x4000", 00:04:31.751 "tpoint_mask": "0x0" 00:04:31.751 }, 00:04:31.751 "sock": { 00:04:31.751 "mask": "0x8000", 00:04:31.751 "tpoint_mask": "0x0" 00:04:31.751 } 00:04:31.751 }' 00:04:31.751 21:19:54 -- rpc/rpc.sh@43 -- # jq length 00:04:31.751 21:19:54 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:31.751 21:19:54 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:32.009 21:19:54 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:32.009 21:19:54 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
00:04:32.009 21:19:54 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:32.009 21:19:54 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:32.009 21:19:54 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:32.009 21:19:54 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:32.009 21:19:54 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:32.009 00:04:32.009 real 0m0.221s 00:04:32.009 user 0m0.180s 00:04:32.009 sys 0m0.036s 00:04:32.009 21:19:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:32.009 21:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:32.009 ************************************ 00:04:32.009 END TEST rpc_trace_cmd_test 00:04:32.009 ************************************ 00:04:32.009 21:19:54 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:32.009 21:19:54 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:32.009 21:19:54 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:32.009 21:19:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:32.009 21:19:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:32.009 21:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:32.268 ************************************ 00:04:32.268 START TEST rpc_daemon_integrity 00:04:32.268 ************************************ 00:04:32.268 21:19:54 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:32.268 21:19:54 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:32.268 21:19:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:32.268 21:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:32.268 21:19:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:32.268 21:19:54 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:32.268 21:19:54 -- rpc/rpc.sh@13 -- # jq length 00:04:32.268 21:19:55 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:32.268 21:19:55 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:32.268 21:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:32.268 21:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:32.268 21:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:32.268 21:19:55 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:32.268 21:19:55 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:32.268 21:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:32.268 21:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:32.269 21:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:32.269 21:19:55 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:32.269 { 00:04:32.269 "name": "Malloc2", 00:04:32.269 "aliases": [ 00:04:32.269 "0758513f-f779-4a3c-85f2-2503e1f1317d" 00:04:32.269 ], 00:04:32.269 "product_name": "Malloc disk", 00:04:32.269 "block_size": 512, 00:04:32.269 "num_blocks": 16384, 00:04:32.269 "uuid": "0758513f-f779-4a3c-85f2-2503e1f1317d", 00:04:32.269 "assigned_rate_limits": { 00:04:32.269 "rw_ios_per_sec": 0, 00:04:32.269 "rw_mbytes_per_sec": 0, 00:04:32.269 "r_mbytes_per_sec": 0, 00:04:32.269 "w_mbytes_per_sec": 0 00:04:32.269 }, 00:04:32.269 "claimed": false, 00:04:32.269 "zoned": false, 00:04:32.269 "supported_io_types": { 00:04:32.269 "read": true, 00:04:32.269 "write": true, 00:04:32.269 "unmap": true, 00:04:32.269 "write_zeroes": true, 00:04:32.269 "flush": true, 00:04:32.269 "reset": true, 00:04:32.269 "compare": false, 00:04:32.269 "compare_and_write": false, 00:04:32.269 "abort": true, 00:04:32.269 "nvme_admin": false, 00:04:32.269 "nvme_io": false 00:04:32.269 }, 00:04:32.269 "memory_domains": [ 00:04:32.269 { 00:04:32.269 "dma_device_id": "system", 00:04:32.269 
"dma_device_type": 1 00:04:32.269 }, 00:04:32.269 { 00:04:32.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.269 "dma_device_type": 2 00:04:32.269 } 00:04:32.269 ], 00:04:32.269 "driver_specific": {} 00:04:32.269 } 00:04:32.269 ]' 00:04:32.269 21:19:55 -- rpc/rpc.sh@17 -- # jq length 00:04:32.269 21:19:55 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:32.269 21:19:55 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:32.269 21:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:32.269 21:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:32.269 [2024-04-24 21:19:55.116266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:32.269 [2024-04-24 21:19:55.116294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:32.269 [2024-04-24 21:19:55.116307] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfa5b20 00:04:32.269 [2024-04-24 21:19:55.116316] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:32.269 [2024-04-24 21:19:55.117230] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:32.269 [2024-04-24 21:19:55.117252] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:32.269 Passthru0 00:04:32.269 21:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:32.269 21:19:55 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:32.269 21:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:32.269 21:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:32.269 21:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:32.269 21:19:55 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:32.269 { 00:04:32.269 "name": "Malloc2", 00:04:32.269 "aliases": [ 00:04:32.269 "0758513f-f779-4a3c-85f2-2503e1f1317d" 00:04:32.269 ], 00:04:32.269 "product_name": "Malloc disk", 00:04:32.269 "block_size": 512, 00:04:32.269 "num_blocks": 16384, 00:04:32.269 "uuid": "0758513f-f779-4a3c-85f2-2503e1f1317d", 00:04:32.269 "assigned_rate_limits": { 00:04:32.269 "rw_ios_per_sec": 0, 00:04:32.269 "rw_mbytes_per_sec": 0, 00:04:32.269 "r_mbytes_per_sec": 0, 00:04:32.269 "w_mbytes_per_sec": 0 00:04:32.269 }, 00:04:32.269 "claimed": true, 00:04:32.269 "claim_type": "exclusive_write", 00:04:32.269 "zoned": false, 00:04:32.269 "supported_io_types": { 00:04:32.269 "read": true, 00:04:32.269 "write": true, 00:04:32.269 "unmap": true, 00:04:32.269 "write_zeroes": true, 00:04:32.269 "flush": true, 00:04:32.269 "reset": true, 00:04:32.269 "compare": false, 00:04:32.269 "compare_and_write": false, 00:04:32.269 "abort": true, 00:04:32.269 "nvme_admin": false, 00:04:32.269 "nvme_io": false 00:04:32.269 }, 00:04:32.269 "memory_domains": [ 00:04:32.269 { 00:04:32.269 "dma_device_id": "system", 00:04:32.269 "dma_device_type": 1 00:04:32.269 }, 00:04:32.269 { 00:04:32.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.269 "dma_device_type": 2 00:04:32.269 } 00:04:32.269 ], 00:04:32.269 "driver_specific": {} 00:04:32.269 }, 00:04:32.269 { 00:04:32.269 "name": "Passthru0", 00:04:32.269 "aliases": [ 00:04:32.269 "51078260-7412-571c-8ac7-f54d37ea89a7" 00:04:32.269 ], 00:04:32.269 "product_name": "passthru", 00:04:32.269 "block_size": 512, 00:04:32.269 "num_blocks": 16384, 00:04:32.269 "uuid": "51078260-7412-571c-8ac7-f54d37ea89a7", 00:04:32.269 "assigned_rate_limits": { 00:04:32.269 "rw_ios_per_sec": 0, 00:04:32.269 "rw_mbytes_per_sec": 0, 00:04:32.269 "r_mbytes_per_sec": 0, 00:04:32.269 
"w_mbytes_per_sec": 0 00:04:32.269 }, 00:04:32.269 "claimed": false, 00:04:32.269 "zoned": false, 00:04:32.269 "supported_io_types": { 00:04:32.269 "read": true, 00:04:32.269 "write": true, 00:04:32.269 "unmap": true, 00:04:32.269 "write_zeroes": true, 00:04:32.269 "flush": true, 00:04:32.269 "reset": true, 00:04:32.269 "compare": false, 00:04:32.269 "compare_and_write": false, 00:04:32.269 "abort": true, 00:04:32.269 "nvme_admin": false, 00:04:32.269 "nvme_io": false 00:04:32.269 }, 00:04:32.269 "memory_domains": [ 00:04:32.269 { 00:04:32.269 "dma_device_id": "system", 00:04:32.269 "dma_device_type": 1 00:04:32.269 }, 00:04:32.269 { 00:04:32.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.269 "dma_device_type": 2 00:04:32.269 } 00:04:32.269 ], 00:04:32.269 "driver_specific": { 00:04:32.269 "passthru": { 00:04:32.269 "name": "Passthru0", 00:04:32.269 "base_bdev_name": "Malloc2" 00:04:32.269 } 00:04:32.269 } 00:04:32.269 } 00:04:32.269 ]' 00:04:32.269 21:19:55 -- rpc/rpc.sh@21 -- # jq length 00:04:32.528 21:19:55 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:32.528 21:19:55 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:32.528 21:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:32.528 21:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:32.528 21:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:32.528 21:19:55 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:32.528 21:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:32.528 21:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:32.528 21:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:32.528 21:19:55 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:32.528 21:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:32.528 21:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:32.528 21:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:32.528 21:19:55 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:32.528 21:19:55 -- rpc/rpc.sh@26 -- # jq length 00:04:32.528 21:19:55 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:32.528 00:04:32.528 real 0m0.282s 00:04:32.528 user 0m0.173s 00:04:32.528 sys 0m0.048s 00:04:32.528 21:19:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:32.528 21:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:32.528 ************************************ 00:04:32.528 END TEST rpc_daemon_integrity 00:04:32.528 ************************************ 00:04:32.528 21:19:55 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:32.528 21:19:55 -- rpc/rpc.sh@84 -- # killprocess 2671763 00:04:32.528 21:19:55 -- common/autotest_common.sh@936 -- # '[' -z 2671763 ']' 00:04:32.528 21:19:55 -- common/autotest_common.sh@940 -- # kill -0 2671763 00:04:32.528 21:19:55 -- common/autotest_common.sh@941 -- # uname 00:04:32.528 21:19:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:32.528 21:19:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2671763 00:04:32.528 21:19:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:32.528 21:19:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:32.528 21:19:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2671763' 00:04:32.528 killing process with pid 2671763 00:04:32.528 21:19:55 -- common/autotest_common.sh@955 -- # kill 2671763 00:04:32.528 21:19:55 -- common/autotest_common.sh@960 -- # wait 2671763 00:04:33.096 00:04:33.096 real 0m3.073s 00:04:33.096 user 0m3.888s 
00:04:33.096 sys 0m1.059s 00:04:33.096 21:19:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:33.096 21:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:33.096 ************************************ 00:04:33.096 END TEST rpc 00:04:33.096 ************************************ 00:04:33.096 21:19:55 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:33.096 21:19:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.096 21:19:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.096 21:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:33.096 ************************************ 00:04:33.096 START TEST skip_rpc 00:04:33.096 ************************************ 00:04:33.096 21:19:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:33.096 * Looking for test storage... 00:04:33.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:33.096 21:19:55 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:33.096 21:19:55 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:33.096 21:19:55 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:33.096 21:19:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.096 21:19:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.096 21:19:55 -- common/autotest_common.sh@10 -- # set +x 00:04:33.433 ************************************ 00:04:33.433 START TEST skip_rpc 00:04:33.433 ************************************ 00:04:33.433 21:19:56 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:33.433 21:19:56 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2672536 00:04:33.433 21:19:56 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.433 21:19:56 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:33.433 21:19:56 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:33.433 [2024-04-24 21:19:56.171415] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:04:33.433 [2024-04-24 21:19:56.171460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672536 ] 00:04:33.433 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.433 [2024-04-24 21:19:56.240040] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.433 [2024-04-24 21:19:56.306824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.703 21:20:01 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:38.703 21:20:01 -- common/autotest_common.sh@638 -- # local es=0 00:04:38.703 21:20:01 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:38.703 21:20:01 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:38.703 21:20:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:38.703 21:20:01 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:38.703 21:20:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:38.703 21:20:01 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:38.703 21:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:38.703 21:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:38.703 21:20:01 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:38.703 21:20:01 -- common/autotest_common.sh@641 -- # es=1 00:04:38.703 21:20:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:38.703 21:20:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:38.703 21:20:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:38.703 21:20:01 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:38.703 21:20:01 -- rpc/skip_rpc.sh@23 -- # killprocess 2672536 00:04:38.703 21:20:01 -- common/autotest_common.sh@936 -- # '[' -z 2672536 ']' 00:04:38.703 21:20:01 -- common/autotest_common.sh@940 -- # kill -0 2672536 00:04:38.703 21:20:01 -- common/autotest_common.sh@941 -- # uname 00:04:38.703 21:20:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:38.703 21:20:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2672536 00:04:38.703 21:20:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:38.703 21:20:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:38.703 21:20:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2672536' 00:04:38.703 killing process with pid 2672536 00:04:38.703 21:20:01 -- common/autotest_common.sh@955 -- # kill 2672536 00:04:38.703 21:20:01 -- common/autotest_common.sh@960 -- # wait 2672536 00:04:38.703 00:04:38.704 real 0m5.385s 00:04:38.704 user 0m5.142s 00:04:38.704 sys 0m0.272s 00:04:38.704 21:20:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:38.704 21:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:38.704 ************************************ 00:04:38.704 END TEST skip_rpc 00:04:38.704 ************************************ 00:04:38.704 21:20:01 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:38.704 21:20:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.704 21:20:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.704 21:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:38.962 ************************************ 00:04:38.962 START TEST skip_rpc_with_json 00:04:38.962 ************************************ 
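skip_rpc_with_json first queries a transport that does not exist yet, then creates it and snapshots the whole configuration. The same three steps, issued manually (default socket assumed):

    ./scripts/rpc.py nvmf_get_transports --trtype tcp   # fails: code -19, "No such device"
    ./scripts/rpc.py nvmf_create_transport -t tcp       # logs '*** TCP Transport Init ***'
    ./scripts/rpc.py save_config > config.json          # the JSON dump printed below

The request/response pair logged below shows the raw JSON-RPC shape of that first, deliberately failing call.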
00:04:38.962 21:20:01 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:38.962 21:20:01 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:38.962 21:20:01 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2673486 00:04:38.962 21:20:01 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.962 21:20:01 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.962 21:20:01 -- rpc/skip_rpc.sh@31 -- # waitforlisten 2673486 00:04:38.962 21:20:01 -- common/autotest_common.sh@817 -- # '[' -z 2673486 ']' 00:04:38.962 21:20:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.962 21:20:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:38.962 21:20:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.962 21:20:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:38.962 21:20:01 -- common/autotest_common.sh@10 -- # set +x 00:04:38.962 [2024-04-24 21:20:01.769197] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:04:38.962 [2024-04-24 21:20:01.769244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673486 ] 00:04:38.962 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.962 [2024-04-24 21:20:01.841564] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.223 [2024-04-24 21:20:01.911581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.789 21:20:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:39.789 21:20:02 -- common/autotest_common.sh@850 -- # return 0 00:04:39.789 21:20:02 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:39.789 21:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:39.789 21:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:39.789 [2024-04-24 21:20:02.554689] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:39.789 request: 00:04:39.789 { 00:04:39.789 "trtype": "tcp", 00:04:39.789 "method": "nvmf_get_transports", 00:04:39.789 "req_id": 1 00:04:39.789 } 00:04:39.789 Got JSON-RPC error response 00:04:39.789 response: 00:04:39.789 { 00:04:39.789 "code": -19, 00:04:39.789 "message": "No such device" 00:04:39.789 } 00:04:39.789 21:20:02 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:39.789 21:20:02 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:39.789 21:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:39.789 21:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:39.789 [2024-04-24 21:20:02.562773] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:39.789 21:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:39.789 21:20:02 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:39.789 21:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:39.789 21:20:02 -- common/autotest_common.sh@10 -- # set +x 00:04:40.047 21:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:40.047 21:20:02 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:40.047 { 
00:04:40.047 "subsystems": [ 00:04:40.047 { 00:04:40.047 "subsystem": "vfio_user_target", 00:04:40.047 "config": null 00:04:40.047 }, 00:04:40.047 { 00:04:40.047 "subsystem": "keyring", 00:04:40.047 "config": [] 00:04:40.047 }, 00:04:40.047 { 00:04:40.047 "subsystem": "iobuf", 00:04:40.047 "config": [ 00:04:40.047 { 00:04:40.047 "method": "iobuf_set_options", 00:04:40.047 "params": { 00:04:40.047 "small_pool_count": 8192, 00:04:40.047 "large_pool_count": 1024, 00:04:40.047 "small_bufsize": 8192, 00:04:40.047 "large_bufsize": 135168 00:04:40.047 } 00:04:40.047 } 00:04:40.047 ] 00:04:40.047 }, 00:04:40.047 { 00:04:40.047 "subsystem": "sock", 00:04:40.047 "config": [ 00:04:40.047 { 00:04:40.047 "method": "sock_impl_set_options", 00:04:40.047 "params": { 00:04:40.047 "impl_name": "posix", 00:04:40.047 "recv_buf_size": 2097152, 00:04:40.047 "send_buf_size": 2097152, 00:04:40.047 "enable_recv_pipe": true, 00:04:40.047 "enable_quickack": false, 00:04:40.047 "enable_placement_id": 0, 00:04:40.047 "enable_zerocopy_send_server": true, 00:04:40.047 "enable_zerocopy_send_client": false, 00:04:40.047 "zerocopy_threshold": 0, 00:04:40.047 "tls_version": 0, 00:04:40.047 "enable_ktls": false 00:04:40.047 } 00:04:40.047 }, 00:04:40.047 { 00:04:40.047 "method": "sock_impl_set_options", 00:04:40.047 "params": { 00:04:40.047 "impl_name": "ssl", 00:04:40.047 "recv_buf_size": 4096, 00:04:40.047 "send_buf_size": 4096, 00:04:40.047 "enable_recv_pipe": true, 00:04:40.047 "enable_quickack": false, 00:04:40.047 "enable_placement_id": 0, 00:04:40.047 "enable_zerocopy_send_server": true, 00:04:40.047 "enable_zerocopy_send_client": false, 00:04:40.047 "zerocopy_threshold": 0, 00:04:40.047 "tls_version": 0, 00:04:40.047 "enable_ktls": false 00:04:40.047 } 00:04:40.047 } 00:04:40.047 ] 00:04:40.047 }, 00:04:40.047 { 00:04:40.047 "subsystem": "vmd", 00:04:40.047 "config": [] 00:04:40.047 }, 00:04:40.047 { 00:04:40.047 "subsystem": "accel", 00:04:40.047 "config": [ 00:04:40.047 { 00:04:40.047 "method": "accel_set_options", 00:04:40.047 "params": { 00:04:40.047 "small_cache_size": 128, 00:04:40.047 "large_cache_size": 16, 00:04:40.047 "task_count": 2048, 00:04:40.047 "sequence_count": 2048, 00:04:40.047 "buf_count": 2048 00:04:40.047 } 00:04:40.047 } 00:04:40.047 ] 00:04:40.047 }, 00:04:40.047 { 00:04:40.047 "subsystem": "bdev", 00:04:40.047 "config": [ 00:04:40.047 { 00:04:40.047 "method": "bdev_set_options", 00:04:40.047 "params": { 00:04:40.047 "bdev_io_pool_size": 65535, 00:04:40.047 "bdev_io_cache_size": 256, 00:04:40.047 "bdev_auto_examine": true, 00:04:40.047 "iobuf_small_cache_size": 128, 00:04:40.047 "iobuf_large_cache_size": 16 00:04:40.047 } 00:04:40.047 }, 00:04:40.047 { 00:04:40.047 "method": "bdev_raid_set_options", 00:04:40.047 "params": { 00:04:40.047 "process_window_size_kb": 1024 00:04:40.047 } 00:04:40.047 }, 00:04:40.047 { 00:04:40.047 "method": "bdev_iscsi_set_options", 00:04:40.047 "params": { 00:04:40.047 "timeout_sec": 30 00:04:40.047 } 00:04:40.047 }, 00:04:40.047 { 00:04:40.047 "method": "bdev_nvme_set_options", 00:04:40.047 "params": { 00:04:40.047 "action_on_timeout": "none", 00:04:40.047 "timeout_us": 0, 00:04:40.047 "timeout_admin_us": 0, 00:04:40.047 "keep_alive_timeout_ms": 10000, 00:04:40.048 "arbitration_burst": 0, 00:04:40.048 "low_priority_weight": 0, 00:04:40.048 "medium_priority_weight": 0, 00:04:40.048 "high_priority_weight": 0, 00:04:40.048 "nvme_adminq_poll_period_us": 10000, 00:04:40.048 "nvme_ioq_poll_period_us": 0, 00:04:40.048 "io_queue_requests": 0, 00:04:40.048 
"delay_cmd_submit": true, 00:04:40.048 "transport_retry_count": 4, 00:04:40.048 "bdev_retry_count": 3, 00:04:40.048 "transport_ack_timeout": 0, 00:04:40.048 "ctrlr_loss_timeout_sec": 0, 00:04:40.048 "reconnect_delay_sec": 0, 00:04:40.048 "fast_io_fail_timeout_sec": 0, 00:04:40.048 "disable_auto_failback": false, 00:04:40.048 "generate_uuids": false, 00:04:40.048 "transport_tos": 0, 00:04:40.048 "nvme_error_stat": false, 00:04:40.048 "rdma_srq_size": 0, 00:04:40.048 "io_path_stat": false, 00:04:40.048 "allow_accel_sequence": false, 00:04:40.048 "rdma_max_cq_size": 0, 00:04:40.048 "rdma_cm_event_timeout_ms": 0, 00:04:40.048 "dhchap_digests": [ 00:04:40.048 "sha256", 00:04:40.048 "sha384", 00:04:40.048 "sha512" 00:04:40.048 ], 00:04:40.048 "dhchap_dhgroups": [ 00:04:40.048 "null", 00:04:40.048 "ffdhe2048", 00:04:40.048 "ffdhe3072", 00:04:40.048 "ffdhe4096", 00:04:40.048 "ffdhe6144", 00:04:40.048 "ffdhe8192" 00:04:40.048 ] 00:04:40.048 } 00:04:40.048 }, 00:04:40.048 { 00:04:40.048 "method": "bdev_nvme_set_hotplug", 00:04:40.048 "params": { 00:04:40.048 "period_us": 100000, 00:04:40.048 "enable": false 00:04:40.048 } 00:04:40.048 }, 00:04:40.048 { 00:04:40.048 "method": "bdev_wait_for_examine" 00:04:40.048 } 00:04:40.048 ] 00:04:40.048 }, 00:04:40.048 { 00:04:40.048 "subsystem": "scsi", 00:04:40.048 "config": null 00:04:40.048 }, 00:04:40.048 { 00:04:40.048 "subsystem": "scheduler", 00:04:40.048 "config": [ 00:04:40.048 { 00:04:40.048 "method": "framework_set_scheduler", 00:04:40.048 "params": { 00:04:40.048 "name": "static" 00:04:40.048 } 00:04:40.048 } 00:04:40.048 ] 00:04:40.048 }, 00:04:40.048 { 00:04:40.048 "subsystem": "vhost_scsi", 00:04:40.048 "config": [] 00:04:40.048 }, 00:04:40.048 { 00:04:40.048 "subsystem": "vhost_blk", 00:04:40.048 "config": [] 00:04:40.048 }, 00:04:40.048 { 00:04:40.048 "subsystem": "ublk", 00:04:40.048 "config": [] 00:04:40.048 }, 00:04:40.048 { 00:04:40.048 "subsystem": "nbd", 00:04:40.048 "config": [] 00:04:40.048 }, 00:04:40.048 { 00:04:40.048 "subsystem": "nvmf", 00:04:40.048 "config": [ 00:04:40.048 { 00:04:40.048 "method": "nvmf_set_config", 00:04:40.048 "params": { 00:04:40.048 "discovery_filter": "match_any", 00:04:40.048 "admin_cmd_passthru": { 00:04:40.048 "identify_ctrlr": false 00:04:40.048 } 00:04:40.048 } 00:04:40.048 }, 00:04:40.048 { 00:04:40.048 "method": "nvmf_set_max_subsystems", 00:04:40.048 "params": { 00:04:40.048 "max_subsystems": 1024 00:04:40.048 } 00:04:40.048 }, 00:04:40.048 { 00:04:40.048 "method": "nvmf_set_crdt", 00:04:40.048 "params": { 00:04:40.048 "crdt1": 0, 00:04:40.048 "crdt2": 0, 00:04:40.048 "crdt3": 0 00:04:40.048 } 00:04:40.048 }, 00:04:40.048 { 00:04:40.048 "method": "nvmf_create_transport", 00:04:40.048 "params": { 00:04:40.048 "trtype": "TCP", 00:04:40.048 "max_queue_depth": 128, 00:04:40.048 "max_io_qpairs_per_ctrlr": 127, 00:04:40.048 "in_capsule_data_size": 4096, 00:04:40.048 "max_io_size": 131072, 00:04:40.048 "io_unit_size": 131072, 00:04:40.048 "max_aq_depth": 128, 00:04:40.048 "num_shared_buffers": 511, 00:04:40.048 "buf_cache_size": 4294967295, 00:04:40.048 "dif_insert_or_strip": false, 00:04:40.048 "zcopy": false, 00:04:40.048 "c2h_success": true, 00:04:40.048 "sock_priority": 0, 00:04:40.048 "abort_timeout_sec": 1, 00:04:40.048 "ack_timeout": 0, 00:04:40.048 "data_wr_pool_size": 0 00:04:40.048 } 00:04:40.048 } 00:04:40.048 ] 00:04:40.048 }, 00:04:40.048 { 00:04:40.048 "subsystem": "iscsi", 00:04:40.048 "config": [ 00:04:40.048 { 00:04:40.048 "method": "iscsi_set_options", 00:04:40.048 "params": { 00:04:40.048 
"node_base": "iqn.2016-06.io.spdk", 00:04:40.048 "max_sessions": 128, 00:04:40.048 "max_connections_per_session": 2, 00:04:40.048 "max_queue_depth": 64, 00:04:40.048 "default_time2wait": 2, 00:04:40.048 "default_time2retain": 20, 00:04:40.048 "first_burst_length": 8192, 00:04:40.048 "immediate_data": true, 00:04:40.048 "allow_duplicated_isid": false, 00:04:40.048 "error_recovery_level": 0, 00:04:40.048 "nop_timeout": 60, 00:04:40.048 "nop_in_interval": 30, 00:04:40.048 "disable_chap": false, 00:04:40.048 "require_chap": false, 00:04:40.048 "mutual_chap": false, 00:04:40.048 "chap_group": 0, 00:04:40.048 "max_large_datain_per_connection": 64, 00:04:40.048 "max_r2t_per_connection": 4, 00:04:40.048 "pdu_pool_size": 36864, 00:04:40.048 "immediate_data_pool_size": 16384, 00:04:40.048 "data_out_pool_size": 2048 00:04:40.048 } 00:04:40.048 } 00:04:40.048 ] 00:04:40.048 } 00:04:40.048 ] 00:04:40.048 } 00:04:40.048 21:20:02 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:40.048 21:20:02 -- rpc/skip_rpc.sh@40 -- # killprocess 2673486 00:04:40.048 21:20:02 -- common/autotest_common.sh@936 -- # '[' -z 2673486 ']' 00:04:40.048 21:20:02 -- common/autotest_common.sh@940 -- # kill -0 2673486 00:04:40.048 21:20:02 -- common/autotest_common.sh@941 -- # uname 00:04:40.048 21:20:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:40.048 21:20:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2673486 00:04:40.048 21:20:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:40.048 21:20:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:40.048 21:20:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2673486' 00:04:40.048 killing process with pid 2673486 00:04:40.048 21:20:02 -- common/autotest_common.sh@955 -- # kill 2673486 00:04:40.048 21:20:02 -- common/autotest_common.sh@960 -- # wait 2673486 00:04:40.306 21:20:03 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2673680 00:04:40.306 21:20:03 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:40.306 21:20:03 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:45.565 21:20:08 -- rpc/skip_rpc.sh@50 -- # killprocess 2673680 00:04:45.565 21:20:08 -- common/autotest_common.sh@936 -- # '[' -z 2673680 ']' 00:04:45.565 21:20:08 -- common/autotest_common.sh@940 -- # kill -0 2673680 00:04:45.565 21:20:08 -- common/autotest_common.sh@941 -- # uname 00:04:45.565 21:20:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:45.565 21:20:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2673680 00:04:45.565 21:20:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:45.565 21:20:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:45.565 21:20:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2673680' 00:04:45.565 killing process with pid 2673680 00:04:45.565 21:20:08 -- common/autotest_common.sh@955 -- # kill 2673680 00:04:45.565 21:20:08 -- common/autotest_common.sh@960 -- # wait 2673680 00:04:45.823 21:20:08 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:45.823 21:20:08 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:45.823 00:04:45.823 real 0m6.786s 00:04:45.823 user 0m6.578s 00:04:45.823 sys 0m0.629s 00:04:45.823 
00:04:45.823 21:20:08 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:45.823 21:20:08 -- common/autotest_common.sh@10 -- # set +x
00:04:45.823 ************************************
00:04:45.823 END TEST skip_rpc_with_json
00:04:45.823 ************************************
00:04:45.823 21:20:08 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:04:45.823 21:20:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:45.823 21:20:08 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:45.823 21:20:08 -- common/autotest_common.sh@10 -- # set +x
00:04:45.823 ************************************
00:04:45.823 START TEST skip_rpc_with_delay
00:04:45.823 ************************************
00:04:45.823 21:20:08 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay
00:04:45.823 21:20:08 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:45.823 21:20:08 -- common/autotest_common.sh@638 -- # local es=0
00:04:45.823 21:20:08 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:45.823 21:20:08 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:45.823 21:20:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:04:45.823 21:20:08 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:45.823 21:20:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:04:45.823 21:20:08 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:45.823 21:20:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:04:45.823 21:20:08 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:45.823 21:20:08 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:04:45.823 21:20:08 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:46.081 [2024-04-24 21:20:08.738987] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
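The *ERROR* line above is the expected outcome of this negative test: --wait-for-rpc defers framework initialization until a framework_start_init RPC arrives, so it is incompatible with --no-rpc-server, which never opens an RPC listener. Roughly, with the same binary:

  # Invalid: no RPC server will ever deliver framework_start_init
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # exits with the error logged above
  # Valid: start paused, then finish initialization explicitly
  ./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
  ./scripts/rpc.py framework_start_init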
00:04:46.081 [2024-04-24 21:20:08.739052] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:04:46.081 21:20:08 -- common/autotest_common.sh@641 -- # es=1
00:04:46.081 21:20:08 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:04:46.081 21:20:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:04:46.081 21:20:08 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:04:46.081
00:04:46.081 real 0m0.064s
00:04:46.081 user 0m0.034s
00:04:46.081 sys 0m0.029s
00:04:46.081 21:20:08 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:46.081 21:20:08 -- common/autotest_common.sh@10 -- # set +x
00:04:46.081 ************************************
00:04:46.081 END TEST skip_rpc_with_delay
00:04:46.081 ************************************
00:04:46.081 21:20:08 -- rpc/skip_rpc.sh@77 -- # uname
00:04:46.081 21:20:08 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:04:46.081 21:20:08 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:04:46.081 21:20:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:46.081 21:20:08 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:46.081 21:20:08 -- common/autotest_common.sh@10 -- # set +x
00:04:46.081 ************************************
00:04:46.081 START TEST exit_on_failed_rpc_init
00:04:46.081 ************************************
00:04:46.081 21:20:08 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init
00:04:46.081 21:20:08 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2674789
00:04:46.081 21:20:08 -- rpc/skip_rpc.sh@63 -- # waitforlisten 2674789
00:04:46.081 21:20:08 -- common/autotest_common.sh@817 -- # '[' -z 2674789 ']'
00:04:46.081 21:20:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:46.081 21:20:08 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:46.081 21:20:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:46.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:46.081 21:20:08 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:46.081 21:20:08 -- common/autotest_common.sh@10 -- # set +x
00:04:46.081 21:20:08 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:46.339 [2024-04-24 21:20:08.992793] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:04:46.340 [2024-04-24 21:20:08.992843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674789 ]
00:04:46.340 EAL: No free 2048 kB hugepages reported on node 1
00:04:46.340 [2024-04-24 21:20:09.062897] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:46.340 [2024-04-24 21:20:09.137000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:46.905 21:20:09 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:46.905 21:20:09 -- common/autotest_common.sh@850 -- # return 0
00:04:46.905 21:20:09 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:46.905 21:20:09 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:46.905 21:20:09 -- common/autotest_common.sh@638 -- # local es=0
00:04:46.905 21:20:09 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:46.905 21:20:09 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:46.905 21:20:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:04:46.905 21:20:09 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:46.905 21:20:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:04:46.905 21:20:09 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:46.905 21:20:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:04:46.905 21:20:09 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:04:46.905 21:20:09 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:04:46.905 21:20:09 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:04:47.163 [2024-04-24 21:20:09.822901] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:04:47.163 [2024-04-24 21:20:09.822950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2675054 ]
00:04:47.163 EAL: No free 2048 kB hugepages reported on node 1
00:04:47.163 [2024-04-24 21:20:09.890818] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:47.163 [2024-04-24 21:20:09.959006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:47.163 [2024-04-24 21:20:09.959075] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
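This failure is likewise intentional: exit_on_failed_rpc_init starts a second spdk_tgt (-m 0x2) while the first instance (-m 0x1, pid 2674789) still owns the default RPC socket /var/tmp/spdk.sock, and verifies the newcomer exits cleanly instead of hanging. Outside a negative test, two targets can coexist by giving each its own socket with -r; a sketch (the second socket path is illustrative):

  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
  ./scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods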
00:04:47.163 [2024-04-24 21:20:09.959086] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:47.163 [2024-04-24 21:20:09.959094] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:47.163 21:20:10 -- common/autotest_common.sh@641 -- # es=234 00:04:47.163 21:20:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:47.163 21:20:10 -- common/autotest_common.sh@650 -- # es=106 00:04:47.163 21:20:10 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:47.163 21:20:10 -- common/autotest_common.sh@658 -- # es=1 00:04:47.163 21:20:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:47.163 21:20:10 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:47.163 21:20:10 -- rpc/skip_rpc.sh@70 -- # killprocess 2674789 00:04:47.163 21:20:10 -- common/autotest_common.sh@936 -- # '[' -z 2674789 ']' 00:04:47.163 21:20:10 -- common/autotest_common.sh@940 -- # kill -0 2674789 00:04:47.163 21:20:10 -- common/autotest_common.sh@941 -- # uname 00:04:47.163 21:20:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:47.421 21:20:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2674789 00:04:47.421 21:20:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:47.421 21:20:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:47.421 21:20:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2674789' 00:04:47.421 killing process with pid 2674789 00:04:47.421 21:20:10 -- common/autotest_common.sh@955 -- # kill 2674789 00:04:47.421 21:20:10 -- common/autotest_common.sh@960 -- # wait 2674789 00:04:47.681 00:04:47.681 real 0m1.496s 00:04:47.681 user 0m1.685s 00:04:47.681 sys 0m0.437s 00:04:47.681 21:20:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.681 21:20:10 -- common/autotest_common.sh@10 -- # set +x 00:04:47.681 ************************************ 00:04:47.681 END TEST exit_on_failed_rpc_init 00:04:47.681 ************************************ 00:04:47.681 21:20:10 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:47.681 00:04:47.681 real 0m14.609s 00:04:47.681 user 0m13.751s 00:04:47.681 sys 0m1.861s 00:04:47.681 21:20:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.681 21:20:10 -- common/autotest_common.sh@10 -- # set +x 00:04:47.681 ************************************ 00:04:47.681 END TEST skip_rpc 00:04:47.681 ************************************ 00:04:47.681 21:20:10 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:47.681 21:20:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.681 21:20:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.681 21:20:10 -- common/autotest_common.sh@10 -- # set +x 00:04:47.940 ************************************ 00:04:47.940 START TEST rpc_client 00:04:47.940 ************************************ 00:04:47.940 21:20:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:47.940 * Looking for test storage... 
00:04:47.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:47.940 21:20:10 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:47.940 OK 00:04:47.940 21:20:10 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:47.940 00:04:47.940 real 0m0.143s 00:04:47.940 user 0m0.063s 00:04:47.940 sys 0m0.091s 00:04:47.940 21:20:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.940 21:20:10 -- common/autotest_common.sh@10 -- # set +x 00:04:47.940 ************************************ 00:04:47.940 END TEST rpc_client 00:04:47.940 ************************************ 00:04:48.199 21:20:10 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:48.199 21:20:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.199 21:20:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.199 21:20:10 -- common/autotest_common.sh@10 -- # set +x 00:04:48.199 ************************************ 00:04:48.199 START TEST json_config 00:04:48.199 ************************************ 00:04:48.199 21:20:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:48.458 21:20:11 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:48.458 21:20:11 -- nvmf/common.sh@7 -- # uname -s 00:04:48.458 21:20:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.458 21:20:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.458 21:20:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.458 21:20:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.458 21:20:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.458 21:20:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.458 21:20:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.458 21:20:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.458 21:20:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.458 21:20:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.458 21:20:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:04:48.458 21:20:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:04:48.458 21:20:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.458 21:20:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.458 21:20:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.458 21:20:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.458 21:20:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:48.458 21:20:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.459 21:20:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.459 21:20:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.459 21:20:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.459 21:20:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.459 21:20:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.459 21:20:11 -- paths/export.sh@5 -- # export PATH 00:04:48.459 21:20:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.459 21:20:11 -- nvmf/common.sh@47 -- # : 0 00:04:48.459 21:20:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:48.459 21:20:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:48.459 21:20:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.459 21:20:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.459 21:20:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.459 21:20:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:48.459 21:20:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:48.459 21:20:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:48.459 21:20:11 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:48.459 21:20:11 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:48.459 21:20:11 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:48.459 21:20:11 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:48.459 21:20:11 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:48.459 21:20:11 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:48.459 21:20:11 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:48.459 21:20:11 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:48.459 21:20:11 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:48.459 21:20:11 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:48.459 21:20:11 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:04:48.459 21:20:11 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:48.459 21:20:11 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:48.459 21:20:11 -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:48.459 21:20:11 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:48.459 21:20:11 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:48.459 INFO: JSON configuration test init 00:04:48.459 21:20:11 -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:48.459 21:20:11 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:48.459 21:20:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:48.459 21:20:11 -- common/autotest_common.sh@10 -- # set +x 00:04:48.459 21:20:11 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:48.459 21:20:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:48.459 21:20:11 -- common/autotest_common.sh@10 -- # set +x 00:04:48.459 21:20:11 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:48.459 21:20:11 -- json_config/common.sh@9 -- # local app=target 00:04:48.459 21:20:11 -- json_config/common.sh@10 -- # shift 00:04:48.459 21:20:11 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:48.459 21:20:11 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:48.459 21:20:11 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:48.459 21:20:11 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.459 21:20:11 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.459 21:20:11 -- json_config/common.sh@22 -- # app_pid["$app"]=2675412 00:04:48.459 21:20:11 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:48.459 Waiting for target to run... 00:04:48.459 21:20:11 -- json_config/common.sh@25 -- # waitforlisten 2675412 /var/tmp/spdk_tgt.sock 00:04:48.459 21:20:11 -- common/autotest_common.sh@817 -- # '[' -z 2675412 ']' 00:04:48.459 21:20:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.459 21:20:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:48.459 21:20:11 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:48.459 21:20:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:48.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.459 21:20:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:48.459 21:20:11 -- common/autotest_common.sh@10 -- # set +x 00:04:48.459 [2024-04-24 21:20:11.197559] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:04:48.459 [2024-04-24 21:20:11.197612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2675412 ] 00:04:48.459 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.719 [2024-04-24 21:20:11.470762] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.719 [2024-04-24 21:20:11.532239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.287 21:20:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:49.287 21:20:11 -- common/autotest_common.sh@850 -- # return 0 00:04:49.287 21:20:11 -- json_config/common.sh@26 -- # echo '' 00:04:49.287 00:04:49.287 21:20:11 -- json_config/json_config.sh@269 -- # create_accel_config 00:04:49.287 21:20:11 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:49.287 21:20:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:49.287 21:20:11 -- common/autotest_common.sh@10 -- # set +x 00:04:49.287 21:20:11 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:49.287 21:20:11 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:49.287 21:20:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:49.287 21:20:11 -- common/autotest_common.sh@10 -- # set +x 00:04:49.287 21:20:12 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:49.287 21:20:12 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:49.287 21:20:12 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:52.576 21:20:15 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:52.576 21:20:15 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:52.576 21:20:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:52.576 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:04:52.576 21:20:15 -- json_config/json_config.sh@45 -- # local ret=0 00:04:52.576 21:20:15 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:52.576 21:20:15 -- json_config/json_config.sh@46 -- # local enabled_types 00:04:52.576 21:20:15 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:52.576 21:20:15 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:52.576 21:20:15 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:52.576 21:20:15 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:52.576 21:20:15 -- json_config/json_config.sh@48 -- # local get_types 00:04:52.576 21:20:15 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:52.576 21:20:15 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:52.576 21:20:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:52.576 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:04:52.576 21:20:15 -- json_config/json_config.sh@55 -- # return 0 00:04:52.576 21:20:15 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:52.576 21:20:15 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:52.576 21:20:15 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:52.576 21:20:15 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:52.576 21:20:15 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:52.576 21:20:15 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:52.576 21:20:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:52.576 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:04:52.576 21:20:15 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:52.576 21:20:15 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:52.576 21:20:15 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:52.577 21:20:15 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:52.577 21:20:15 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:52.836 MallocForNvmf0 00:04:52.836 21:20:15 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:52.836 21:20:15 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:52.836 MallocForNvmf1 00:04:52.836 21:20:15 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:52.836 21:20:15 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:53.095 [2024-04-24 21:20:15.822966] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:53.095 21:20:15 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:53.095 21:20:15 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:53.354 21:20:16 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.354 21:20:16 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.354 21:20:16 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.354 21:20:16 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.612 21:20:16 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:53.612 21:20:16 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:53.612 [2024-04-24 21:20:16.481051] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:53.612 21:20:16 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:53.612 21:20:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:53.612 
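The create_nvmf_subsystem_config step traced above is ordinary rpc.py traffic against the target's socket; collected into one runnable block (the two shell variables are just shorthand, all commands appear verbatim in the trace):

  rpc=./scripts/rpc.py
  sock=/var/tmp/spdk_tgt.sock
  # Two malloc bdevs to serve as namespaces: 8 MB/512 B blocks and 4 MB/1024 B blocks
  $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport (-u io_unit_size, -c in_capsule_data_size)
  $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
  # One subsystem carrying both namespaces, listening on loopback port 4420
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420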
21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:04:53.910 21:20:16 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:53.910 21:20:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:53.910 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:04:53.910 21:20:16 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:53.910 21:20:16 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:53.910 21:20:16 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:53.910 MallocBdevForConfigChangeCheck 00:04:53.910 21:20:16 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:53.910 21:20:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:53.910 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:04:54.197 21:20:16 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:54.197 21:20:16 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.197 21:20:17 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:54.197 INFO: shutting down applications... 00:04:54.197 21:20:17 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:54.197 21:20:17 -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:54.197 21:20:17 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:54.197 21:20:17 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:56.739 Calling clear_iscsi_subsystem 00:04:56.739 Calling clear_nvmf_subsystem 00:04:56.739 Calling clear_nbd_subsystem 00:04:56.739 Calling clear_ublk_subsystem 00:04:56.739 Calling clear_vhost_blk_subsystem 00:04:56.739 Calling clear_vhost_scsi_subsystem 00:04:56.739 Calling clear_bdev_subsystem 00:04:56.739 21:20:19 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:56.739 21:20:19 -- json_config/json_config.sh@343 -- # count=100 00:04:56.739 21:20:19 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:56.739 21:20:19 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.739 21:20:19 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:56.739 21:20:19 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:56.739 21:20:19 -- json_config/json_config.sh@345 -- # break 00:04:56.739 21:20:19 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:56.739 21:20:19 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:56.739 21:20:19 -- json_config/common.sh@31 -- # local app=target 00:04:56.739 21:20:19 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:56.739 21:20:19 -- json_config/common.sh@35 -- # [[ -n 2675412 ]] 00:04:56.739 21:20:19 -- json_config/common.sh@38 -- # kill -SIGINT 2675412 00:04:56.739 21:20:19 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:56.739 21:20:19 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.739 21:20:19 -- json_config/common.sh@41 -- # kill -0 2675412 00:04:56.739 21:20:19 -- json_config/common.sh@45 -- # sleep 0.5 00:04:57.309 21:20:20 -- json_config/common.sh@40 -- # (( i++ )) 00:04:57.309 21:20:20 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.309 21:20:20 -- json_config/common.sh@41 -- # kill -0 2675412 00:04:57.309 21:20:20 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:57.309 21:20:20 -- json_config/common.sh@43 -- # break 00:04:57.309 21:20:20 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:57.309 21:20:20 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:57.309 SPDK target shutdown done 00:04:57.309 21:20:20 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:57.309 INFO: relaunching applications... 00:04:57.309 21:20:20 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.309 21:20:20 -- json_config/common.sh@9 -- # local app=target 00:04:57.309 21:20:20 -- json_config/common.sh@10 -- # shift 00:04:57.309 21:20:20 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:57.309 21:20:20 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:57.309 21:20:20 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:57.309 21:20:20 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.309 21:20:20 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.309 21:20:20 -- json_config/common.sh@22 -- # app_pid["$app"]=2677012 00:04:57.309 21:20:20 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:57.309 Waiting for target to run... 00:04:57.309 21:20:20 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.309 21:20:20 -- json_config/common.sh@25 -- # waitforlisten 2677012 /var/tmp/spdk_tgt.sock 00:04:57.309 21:20:20 -- common/autotest_common.sh@817 -- # '[' -z 2677012 ']' 00:04:57.309 21:20:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:57.309 21:20:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:57.309 21:20:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:57.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:57.309 21:20:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:57.309 21:20:20 -- common/autotest_common.sh@10 -- # set +x 00:04:57.309 [2024-04-24 21:20:20.133104] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:04:57.309 [2024-04-24 21:20:20.133162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677012 ] 00:04:57.309 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.877 [2024-04-24 21:20:20.570134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.877 [2024-04-24 21:20:20.650014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.166 [2024-04-24 21:20:23.665088] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.166 [2024-04-24 21:20:23.697472] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:01.425 21:20:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:01.425 21:20:24 -- common/autotest_common.sh@850 -- # return 0 00:05:01.425 21:20:24 -- json_config/common.sh@26 -- # echo '' 00:05:01.425 00:05:01.425 21:20:24 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:01.425 21:20:24 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:01.425 INFO: Checking if target configuration is the same... 00:05:01.425 21:20:24 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:01.425 21:20:24 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:01.425 21:20:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.425 + '[' 2 -ne 2 ']' 00:05:01.425 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:01.425 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:01.425 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:01.425 +++ basename /dev/fd/62 00:05:01.425 ++ mktemp /tmp/62.XXX 00:05:01.425 + tmp_file_1=/tmp/62.CmS 00:05:01.425 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:01.425 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:01.425 + tmp_file_2=/tmp/spdk_tgt_config.json.HaJ 00:05:01.425 + ret=0 00:05:01.425 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:01.684 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:01.944 + diff -u /tmp/62.CmS /tmp/spdk_tgt_config.json.HaJ 00:05:01.944 + echo 'INFO: JSON config files are the same' 00:05:01.944 INFO: JSON config files are the same 00:05:01.944 + rm /tmp/62.CmS /tmp/spdk_tgt_config.json.HaJ 00:05:01.944 + exit 0 00:05:01.944 21:20:24 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:01.944 21:20:24 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:01.944 INFO: changing configuration and checking if this can be detected... 
00:05:01.944 21:20:24 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:01.944 21:20:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:01.944 21:20:24 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:01.944 21:20:24 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:01.944 21:20:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.944 + '[' 2 -ne 2 ']' 00:05:01.944 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:01.944 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:01.944 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:01.944 +++ basename /dev/fd/62 00:05:01.944 ++ mktemp /tmp/62.XXX 00:05:01.944 + tmp_file_1=/tmp/62.7TM 00:05:01.944 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:01.944 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:01.944 + tmp_file_2=/tmp/spdk_tgt_config.json.zbF 00:05:01.944 + ret=0 00:05:01.944 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:02.514 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:02.514 + diff -u /tmp/62.7TM /tmp/spdk_tgt_config.json.zbF 00:05:02.514 + ret=1 00:05:02.514 + echo '=== Start of file: /tmp/62.7TM ===' 00:05:02.514 + cat /tmp/62.7TM 00:05:02.514 + echo '=== End of file: /tmp/62.7TM ===' 00:05:02.514 + echo '' 00:05:02.514 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zbF ===' 00:05:02.514 + cat /tmp/spdk_tgt_config.json.zbF 00:05:02.514 + echo '=== End of file: /tmp/spdk_tgt_config.json.zbF ===' 00:05:02.514 + echo '' 00:05:02.514 + rm /tmp/62.7TM /tmp/spdk_tgt_config.json.zbF 00:05:02.514 + exit 1 00:05:02.514 21:20:25 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:02.514 INFO: configuration change detected. 
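Both json_diff.sh runs above use the same comparison recipe: the on-disk spdk_tgt_config.json and a fresh save_config dump are each normalized with config_filter.py -method sort, then compared with diff -u. The first run matched (exit 0, 'INFO: JSON config files are the same'); deleting MallocBdevForConfigChangeCheck was enough to make the second run end with ret=1, which is the change detection being verified. Roughly equivalent, assuming config_filter.py reads JSON on stdin as the pipeline above suggests:

  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/a
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | ./test/json_config/config_filter.py -method sort > /tmp/b
  diff -u /tmp/a /tmp/b && echo 'INFO: JSON config files are the same'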
00:05:02.514 21:20:25 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:02.514 21:20:25 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:02.514 21:20:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:02.514 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:05:02.514 21:20:25 -- json_config/json_config.sh@307 -- # local ret=0 00:05:02.514 21:20:25 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:02.514 21:20:25 -- json_config/json_config.sh@317 -- # [[ -n 2677012 ]] 00:05:02.514 21:20:25 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:02.514 21:20:25 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:02.514 21:20:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:02.514 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:05:02.514 21:20:25 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:02.514 21:20:25 -- json_config/json_config.sh@193 -- # uname -s 00:05:02.514 21:20:25 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:02.514 21:20:25 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:02.514 21:20:25 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:02.514 21:20:25 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:02.514 21:20:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:02.514 21:20:25 -- common/autotest_common.sh@10 -- # set +x 00:05:02.514 21:20:25 -- json_config/json_config.sh@323 -- # killprocess 2677012 00:05:02.514 21:20:25 -- common/autotest_common.sh@936 -- # '[' -z 2677012 ']' 00:05:02.514 21:20:25 -- common/autotest_common.sh@940 -- # kill -0 2677012 00:05:02.514 21:20:25 -- common/autotest_common.sh@941 -- # uname 00:05:02.514 21:20:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:02.514 21:20:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2677012 00:05:02.514 21:20:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:02.514 21:20:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:02.514 21:20:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2677012' 00:05:02.514 killing process with pid 2677012 00:05:02.514 21:20:25 -- common/autotest_common.sh@955 -- # kill 2677012 00:05:02.514 21:20:25 -- common/autotest_common.sh@960 -- # wait 2677012 00:05:05.054 21:20:27 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.054 21:20:27 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:05.054 21:20:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:05.054 21:20:27 -- common/autotest_common.sh@10 -- # set +x 00:05:05.054 21:20:27 -- json_config/json_config.sh@328 -- # return 0 00:05:05.054 21:20:27 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:05.054 INFO: Success 00:05:05.054 00:05:05.054 real 0m16.407s 00:05:05.054 user 0m16.871s 00:05:05.054 sys 0m2.168s 00:05:05.054 21:20:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.054 21:20:27 -- common/autotest_common.sh@10 -- # set +x 00:05:05.054 ************************************ 00:05:05.054 END TEST json_config 00:05:05.054 ************************************ 00:05:05.054 21:20:27 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:05.054 21:20:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.054 21:20:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.054 21:20:27 -- common/autotest_common.sh@10 -- # set +x 00:05:05.054 ************************************ 00:05:05.054 START TEST json_config_extra_key 00:05:05.054 ************************************ 00:05:05.054 21:20:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:05.054 21:20:27 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:05.054 21:20:27 -- nvmf/common.sh@7 -- # uname -s 00:05:05.054 21:20:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.054 21:20:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.054 21:20:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.054 21:20:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.054 21:20:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.054 21:20:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.054 21:20:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.054 21:20:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.055 21:20:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.055 21:20:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.055 21:20:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:05.055 21:20:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:05.055 21:20:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.055 21:20:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.055 21:20:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:05.055 21:20:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.055 21:20:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:05.055 21:20:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.055 21:20:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.055 21:20:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.055 21:20:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.055 21:20:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.055 21:20:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.055 21:20:27 -- paths/export.sh@5 -- # export PATH 00:05:05.055 21:20:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.055 21:20:27 -- nvmf/common.sh@47 -- # : 0 00:05:05.055 21:20:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:05.055 21:20:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:05.055 21:20:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.055 21:20:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.055 21:20:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.055 21:20:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:05.055 21:20:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:05.055 21:20:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:05.055 21:20:27 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:05.055 21:20:27 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:05.055 21:20:27 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:05.055 21:20:27 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:05.055 21:20:27 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:05.055 21:20:27 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:05.055 21:20:27 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:05.055 21:20:27 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:05.055 21:20:27 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:05.055 21:20:27 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:05.055 21:20:27 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:05.055 INFO: launching applications... 
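Unlike the json_config suite, json_config_extra_key boots the target from a small hand-written configuration rather than a saved dump; the launch traced below is equivalent to:

  # extra_key.json is a static config file shipped with the test sources
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json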
00:05:05.055 21:20:27 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:05.055 21:20:27 -- json_config/common.sh@9 -- # local app=target 00:05:05.055 21:20:27 -- json_config/common.sh@10 -- # shift 00:05:05.055 21:20:27 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:05.055 21:20:27 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:05.055 21:20:27 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:05.055 21:20:27 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.055 21:20:27 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.055 21:20:27 -- json_config/common.sh@22 -- # app_pid["$app"]=2678499 00:05:05.055 21:20:27 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:05.055 Waiting for target to run... 00:05:05.055 21:20:27 -- json_config/common.sh@25 -- # waitforlisten 2678499 /var/tmp/spdk_tgt.sock 00:05:05.055 21:20:27 -- common/autotest_common.sh@817 -- # '[' -z 2678499 ']' 00:05:05.055 21:20:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.055 21:20:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:05.055 21:20:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:05.055 21:20:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:05.055 21:20:27 -- common/autotest_common.sh@10 -- # set +x 00:05:05.055 21:20:27 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:05.055 [2024-04-24 21:20:27.775240] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:05.055 [2024-04-24 21:20:27.775290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678499 ] 00:05:05.055 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.623 [2024-04-24 21:20:28.213951] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.623 [2024-04-24 21:20:28.295644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.883 21:20:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:05.883 21:20:28 -- common/autotest_common.sh@850 -- # return 0 00:05:05.883 21:20:28 -- json_config/common.sh@26 -- # echo '' 00:05:05.883 00:05:05.883 21:20:28 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:05.883 INFO: shutting down applications... 
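The shutdown that follows reuses the json_config/common.sh pattern already seen for pids 2675412 and 2677012: send SIGINT, then poll with kill -0 for up to 30 half-second intervals before declaring the target gone. Paraphrased as a sketch (variable names illustrative):

  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2> /dev/null || break   # loop ends once the process has exited
      sleep 0.5
  done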
00:05:05.883 21:20:28 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:05.883 21:20:28 -- json_config/common.sh@31 -- # local app=target 00:05:05.883 21:20:28 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:05.883 21:20:28 -- json_config/common.sh@35 -- # [[ -n 2678499 ]] 00:05:05.883 21:20:28 -- json_config/common.sh@38 -- # kill -SIGINT 2678499 00:05:05.883 21:20:28 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:05.883 21:20:28 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.883 21:20:28 -- json_config/common.sh@41 -- # kill -0 2678499 00:05:05.883 21:20:28 -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.453 21:20:29 -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.453 21:20:29 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.453 21:20:29 -- json_config/common.sh@41 -- # kill -0 2678499 00:05:06.453 21:20:29 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:06.453 21:20:29 -- json_config/common.sh@43 -- # break 00:05:06.453 21:20:29 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:06.453 21:20:29 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:06.453 SPDK target shutdown done 00:05:06.453 21:20:29 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:06.453 Success 00:05:06.453 00:05:06.453 real 0m1.420s 00:05:06.453 user 0m1.010s 00:05:06.453 sys 0m0.568s 00:05:06.453 21:20:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:06.453 21:20:29 -- common/autotest_common.sh@10 -- # set +x 00:05:06.453 ************************************ 00:05:06.453 END TEST json_config_extra_key 00:05:06.453 ************************************ 00:05:06.453 21:20:29 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:06.453 21:20:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.453 21:20:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.453 21:20:29 -- common/autotest_common.sh@10 -- # set +x 00:05:06.453 ************************************ 00:05:06.453 START TEST alias_rpc 00:05:06.453 ************************************ 00:05:06.453 21:20:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:06.713 * Looking for test storage... 00:05:06.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:06.713 21:20:29 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:06.713 21:20:29 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2678881 00:05:06.713 21:20:29 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2678881 00:05:06.713 21:20:29 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.713 21:20:29 -- common/autotest_common.sh@817 -- # '[' -z 2678881 ']' 00:05:06.713 21:20:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.713 21:20:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:06.713 21:20:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
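The alias_rpc suite starting here checks that configs written with deprecated RPC method names still load: the target is started bare, and rpc.py load_config is invoked with -i (--include-aliases) so the legacy names are accepted rather than rejected. A sketch (the conf.json path is illustrative; the real test supplies its own file):

  # Load a config written with legacy method names
  ./scripts/rpc.py load_config -i < test/json_config/alias_rpc/conf.json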
00:05:06.713 21:20:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:06.713 21:20:29 -- common/autotest_common.sh@10 -- # set +x 00:05:06.713 [2024-04-24 21:20:29.429104] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:06.713 [2024-04-24 21:20:29.429157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678881 ] 00:05:06.713 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.713 [2024-04-24 21:20:29.499429] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.713 [2024-04-24 21:20:29.575182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.695 21:20:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:07.696 21:20:30 -- common/autotest_common.sh@850 -- # return 0 00:05:07.696 21:20:30 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:07.696 21:20:30 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2678881 00:05:07.696 21:20:30 -- common/autotest_common.sh@936 -- # '[' -z 2678881 ']' 00:05:07.696 21:20:30 -- common/autotest_common.sh@940 -- # kill -0 2678881 00:05:07.696 21:20:30 -- common/autotest_common.sh@941 -- # uname 00:05:07.696 21:20:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:07.696 21:20:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2678881 00:05:07.696 21:20:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:07.696 21:20:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:07.696 21:20:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2678881' 00:05:07.696 killing process with pid 2678881 00:05:07.696 21:20:30 -- common/autotest_common.sh@955 -- # kill 2678881 00:05:07.696 21:20:30 -- common/autotest_common.sh@960 -- # wait 2678881 00:05:07.956 00:05:07.956 real 0m1.534s 00:05:07.956 user 0m1.605s 00:05:07.956 sys 0m0.468s 00:05:07.956 21:20:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.956 21:20:30 -- common/autotest_common.sh@10 -- # set +x 00:05:07.956 ************************************ 00:05:07.956 END TEST alias_rpc 00:05:07.956 ************************************ 00:05:07.956 21:20:30 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:07.956 21:20:30 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:07.956 21:20:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.956 21:20:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.956 21:20:30 -- common/autotest_common.sh@10 -- # set +x 00:05:08.217 ************************************ 00:05:08.218 START TEST spdkcli_tcp 00:05:08.218 ************************************ 00:05:08.218 21:20:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:08.218 * Looking for test storage... 
00:05:08.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:08.478 21:20:31 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:08.478 21:20:31 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:08.478 21:20:31 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:08.478 21:20:31 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:08.478 21:20:31 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:08.478 21:20:31 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:08.478 21:20:31 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:08.478 21:20:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:08.478 21:20:31 -- common/autotest_common.sh@10 -- # set +x 00:05:08.478 21:20:31 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2679284 00:05:08.478 21:20:31 -- spdkcli/tcp.sh@27 -- # waitforlisten 2679284 00:05:08.478 21:20:31 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:08.478 21:20:31 -- common/autotest_common.sh@817 -- # '[' -z 2679284 ']' 00:05:08.478 21:20:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.478 21:20:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:08.478 21:20:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.478 21:20:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:08.478 21:20:31 -- common/autotest_common.sh@10 -- # set +x 00:05:08.479 [2024-04-24 21:20:31.165837] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:05:08.479 [2024-04-24 21:20:31.165880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679284 ] 00:05:08.479 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.479 [2024-04-24 21:20:31.234649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.479 [2024-04-24 21:20:31.303602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.479 [2024-04-24 21:20:31.303604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.419 21:20:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:09.419 21:20:31 -- common/autotest_common.sh@850 -- # return 0 00:05:09.419 21:20:31 -- spdkcli/tcp.sh@31 -- # socat_pid=2679296 00:05:09.419 21:20:31 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:09.419 21:20:31 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:09.419 [ 00:05:09.419 "bdev_malloc_delete", 00:05:09.419 "bdev_malloc_create", 00:05:09.419 "bdev_null_resize", 00:05:09.419 "bdev_null_delete", 00:05:09.419 "bdev_null_create", 00:05:09.419 "bdev_nvme_cuse_unregister", 00:05:09.419 "bdev_nvme_cuse_register", 00:05:09.419 "bdev_opal_new_user", 00:05:09.419 "bdev_opal_set_lock_state", 00:05:09.419 "bdev_opal_delete", 00:05:09.419 "bdev_opal_get_info", 00:05:09.419 "bdev_opal_create", 00:05:09.419 "bdev_nvme_opal_revert", 00:05:09.419 "bdev_nvme_opal_init", 00:05:09.419 "bdev_nvme_send_cmd", 00:05:09.419 "bdev_nvme_get_path_iostat", 00:05:09.419 "bdev_nvme_get_mdns_discovery_info", 00:05:09.419 "bdev_nvme_stop_mdns_discovery", 00:05:09.419 "bdev_nvme_start_mdns_discovery", 00:05:09.419 "bdev_nvme_set_multipath_policy", 00:05:09.419 "bdev_nvme_set_preferred_path", 00:05:09.419 "bdev_nvme_get_io_paths", 00:05:09.419 "bdev_nvme_remove_error_injection", 00:05:09.419 "bdev_nvme_add_error_injection", 00:05:09.419 "bdev_nvme_get_discovery_info", 00:05:09.419 "bdev_nvme_stop_discovery", 00:05:09.419 "bdev_nvme_start_discovery", 00:05:09.419 "bdev_nvme_get_controller_health_info", 00:05:09.419 "bdev_nvme_disable_controller", 00:05:09.419 "bdev_nvme_enable_controller", 00:05:09.419 "bdev_nvme_reset_controller", 00:05:09.419 "bdev_nvme_get_transport_statistics", 00:05:09.419 "bdev_nvme_apply_firmware", 00:05:09.419 "bdev_nvme_detach_controller", 00:05:09.419 "bdev_nvme_get_controllers", 00:05:09.419 "bdev_nvme_attach_controller", 00:05:09.419 "bdev_nvme_set_hotplug", 00:05:09.419 "bdev_nvme_set_options", 00:05:09.419 "bdev_passthru_delete", 00:05:09.419 "bdev_passthru_create", 00:05:09.419 "bdev_lvol_grow_lvstore", 00:05:09.419 "bdev_lvol_get_lvols", 00:05:09.419 "bdev_lvol_get_lvstores", 00:05:09.419 "bdev_lvol_delete", 00:05:09.419 "bdev_lvol_set_read_only", 00:05:09.419 "bdev_lvol_resize", 00:05:09.419 "bdev_lvol_decouple_parent", 00:05:09.419 "bdev_lvol_inflate", 00:05:09.419 "bdev_lvol_rename", 00:05:09.419 "bdev_lvol_clone_bdev", 00:05:09.419 "bdev_lvol_clone", 00:05:09.419 "bdev_lvol_snapshot", 00:05:09.419 "bdev_lvol_create", 00:05:09.419 "bdev_lvol_delete_lvstore", 00:05:09.419 "bdev_lvol_rename_lvstore", 00:05:09.419 "bdev_lvol_create_lvstore", 00:05:09.419 "bdev_raid_set_options", 00:05:09.419 "bdev_raid_remove_base_bdev", 00:05:09.419 "bdev_raid_add_base_bdev", 00:05:09.419 "bdev_raid_delete", 00:05:09.419 "bdev_raid_create", 
00:05:09.419 "bdev_raid_get_bdevs", 00:05:09.419 "bdev_error_inject_error", 00:05:09.419 "bdev_error_delete", 00:05:09.419 "bdev_error_create", 00:05:09.419 "bdev_split_delete", 00:05:09.419 "bdev_split_create", 00:05:09.419 "bdev_delay_delete", 00:05:09.419 "bdev_delay_create", 00:05:09.419 "bdev_delay_update_latency", 00:05:09.419 "bdev_zone_block_delete", 00:05:09.419 "bdev_zone_block_create", 00:05:09.419 "blobfs_create", 00:05:09.419 "blobfs_detect", 00:05:09.419 "blobfs_set_cache_size", 00:05:09.419 "bdev_aio_delete", 00:05:09.419 "bdev_aio_rescan", 00:05:09.419 "bdev_aio_create", 00:05:09.419 "bdev_ftl_set_property", 00:05:09.419 "bdev_ftl_get_properties", 00:05:09.419 "bdev_ftl_get_stats", 00:05:09.419 "bdev_ftl_unmap", 00:05:09.419 "bdev_ftl_unload", 00:05:09.419 "bdev_ftl_delete", 00:05:09.419 "bdev_ftl_load", 00:05:09.419 "bdev_ftl_create", 00:05:09.419 "bdev_virtio_attach_controller", 00:05:09.419 "bdev_virtio_scsi_get_devices", 00:05:09.419 "bdev_virtio_detach_controller", 00:05:09.419 "bdev_virtio_blk_set_hotplug", 00:05:09.419 "bdev_iscsi_delete", 00:05:09.419 "bdev_iscsi_create", 00:05:09.419 "bdev_iscsi_set_options", 00:05:09.419 "accel_error_inject_error", 00:05:09.419 "ioat_scan_accel_module", 00:05:09.419 "dsa_scan_accel_module", 00:05:09.419 "iaa_scan_accel_module", 00:05:09.419 "vfu_virtio_create_scsi_endpoint", 00:05:09.419 "vfu_virtio_scsi_remove_target", 00:05:09.419 "vfu_virtio_scsi_add_target", 00:05:09.419 "vfu_virtio_create_blk_endpoint", 00:05:09.419 "vfu_virtio_delete_endpoint", 00:05:09.419 "keyring_file_remove_key", 00:05:09.419 "keyring_file_add_key", 00:05:09.419 "iscsi_get_histogram", 00:05:09.419 "iscsi_enable_histogram", 00:05:09.419 "iscsi_set_options", 00:05:09.419 "iscsi_get_auth_groups", 00:05:09.419 "iscsi_auth_group_remove_secret", 00:05:09.419 "iscsi_auth_group_add_secret", 00:05:09.419 "iscsi_delete_auth_group", 00:05:09.419 "iscsi_create_auth_group", 00:05:09.419 "iscsi_set_discovery_auth", 00:05:09.419 "iscsi_get_options", 00:05:09.419 "iscsi_target_node_request_logout", 00:05:09.419 "iscsi_target_node_set_redirect", 00:05:09.419 "iscsi_target_node_set_auth", 00:05:09.419 "iscsi_target_node_add_lun", 00:05:09.419 "iscsi_get_stats", 00:05:09.419 "iscsi_get_connections", 00:05:09.419 "iscsi_portal_group_set_auth", 00:05:09.419 "iscsi_start_portal_group", 00:05:09.419 "iscsi_delete_portal_group", 00:05:09.419 "iscsi_create_portal_group", 00:05:09.419 "iscsi_get_portal_groups", 00:05:09.419 "iscsi_delete_target_node", 00:05:09.419 "iscsi_target_node_remove_pg_ig_maps", 00:05:09.419 "iscsi_target_node_add_pg_ig_maps", 00:05:09.419 "iscsi_create_target_node", 00:05:09.419 "iscsi_get_target_nodes", 00:05:09.419 "iscsi_delete_initiator_group", 00:05:09.419 "iscsi_initiator_group_remove_initiators", 00:05:09.419 "iscsi_initiator_group_add_initiators", 00:05:09.419 "iscsi_create_initiator_group", 00:05:09.419 "iscsi_get_initiator_groups", 00:05:09.419 "nvmf_set_crdt", 00:05:09.419 "nvmf_set_config", 00:05:09.419 "nvmf_set_max_subsystems", 00:05:09.419 "nvmf_subsystem_get_listeners", 00:05:09.419 "nvmf_subsystem_get_qpairs", 00:05:09.419 "nvmf_subsystem_get_controllers", 00:05:09.419 "nvmf_get_stats", 00:05:09.419 "nvmf_get_transports", 00:05:09.419 "nvmf_create_transport", 00:05:09.419 "nvmf_get_targets", 00:05:09.419 "nvmf_delete_target", 00:05:09.419 "nvmf_create_target", 00:05:09.419 "nvmf_subsystem_allow_any_host", 00:05:09.419 "nvmf_subsystem_remove_host", 00:05:09.419 "nvmf_subsystem_add_host", 00:05:09.419 "nvmf_ns_remove_host", 00:05:09.419 
"nvmf_ns_add_host", 00:05:09.419 "nvmf_subsystem_remove_ns", 00:05:09.419 "nvmf_subsystem_add_ns", 00:05:09.419 "nvmf_subsystem_listener_set_ana_state", 00:05:09.419 "nvmf_discovery_get_referrals", 00:05:09.419 "nvmf_discovery_remove_referral", 00:05:09.419 "nvmf_discovery_add_referral", 00:05:09.419 "nvmf_subsystem_remove_listener", 00:05:09.419 "nvmf_subsystem_add_listener", 00:05:09.419 "nvmf_delete_subsystem", 00:05:09.419 "nvmf_create_subsystem", 00:05:09.419 "nvmf_get_subsystems", 00:05:09.419 "env_dpdk_get_mem_stats", 00:05:09.419 "nbd_get_disks", 00:05:09.419 "nbd_stop_disk", 00:05:09.419 "nbd_start_disk", 00:05:09.419 "ublk_recover_disk", 00:05:09.419 "ublk_get_disks", 00:05:09.419 "ublk_stop_disk", 00:05:09.419 "ublk_start_disk", 00:05:09.419 "ublk_destroy_target", 00:05:09.419 "ublk_create_target", 00:05:09.419 "virtio_blk_create_transport", 00:05:09.419 "virtio_blk_get_transports", 00:05:09.419 "vhost_controller_set_coalescing", 00:05:09.419 "vhost_get_controllers", 00:05:09.419 "vhost_delete_controller", 00:05:09.419 "vhost_create_blk_controller", 00:05:09.419 "vhost_scsi_controller_remove_target", 00:05:09.419 "vhost_scsi_controller_add_target", 00:05:09.419 "vhost_start_scsi_controller", 00:05:09.419 "vhost_create_scsi_controller", 00:05:09.419 "thread_set_cpumask", 00:05:09.419 "framework_get_scheduler", 00:05:09.419 "framework_set_scheduler", 00:05:09.419 "framework_get_reactors", 00:05:09.419 "thread_get_io_channels", 00:05:09.419 "thread_get_pollers", 00:05:09.419 "thread_get_stats", 00:05:09.419 "framework_monitor_context_switch", 00:05:09.419 "spdk_kill_instance", 00:05:09.419 "log_enable_timestamps", 00:05:09.419 "log_get_flags", 00:05:09.419 "log_clear_flag", 00:05:09.419 "log_set_flag", 00:05:09.419 "log_get_level", 00:05:09.419 "log_set_level", 00:05:09.420 "log_get_print_level", 00:05:09.420 "log_set_print_level", 00:05:09.420 "framework_enable_cpumask_locks", 00:05:09.420 "framework_disable_cpumask_locks", 00:05:09.420 "framework_wait_init", 00:05:09.420 "framework_start_init", 00:05:09.420 "scsi_get_devices", 00:05:09.420 "bdev_get_histogram", 00:05:09.420 "bdev_enable_histogram", 00:05:09.420 "bdev_set_qos_limit", 00:05:09.420 "bdev_set_qd_sampling_period", 00:05:09.420 "bdev_get_bdevs", 00:05:09.420 "bdev_reset_iostat", 00:05:09.420 "bdev_get_iostat", 00:05:09.420 "bdev_examine", 00:05:09.420 "bdev_wait_for_examine", 00:05:09.420 "bdev_set_options", 00:05:09.420 "notify_get_notifications", 00:05:09.420 "notify_get_types", 00:05:09.420 "accel_get_stats", 00:05:09.420 "accel_set_options", 00:05:09.420 "accel_set_driver", 00:05:09.420 "accel_crypto_key_destroy", 00:05:09.420 "accel_crypto_keys_get", 00:05:09.420 "accel_crypto_key_create", 00:05:09.420 "accel_assign_opc", 00:05:09.420 "accel_get_module_info", 00:05:09.420 "accel_get_opc_assignments", 00:05:09.420 "vmd_rescan", 00:05:09.420 "vmd_remove_device", 00:05:09.420 "vmd_enable", 00:05:09.420 "sock_set_default_impl", 00:05:09.420 "sock_impl_set_options", 00:05:09.420 "sock_impl_get_options", 00:05:09.420 "iobuf_get_stats", 00:05:09.420 "iobuf_set_options", 00:05:09.420 "keyring_get_keys", 00:05:09.420 "framework_get_pci_devices", 00:05:09.420 "framework_get_config", 00:05:09.420 "framework_get_subsystems", 00:05:09.420 "vfu_tgt_set_base_path", 00:05:09.420 "trace_get_info", 00:05:09.420 "trace_get_tpoint_group_mask", 00:05:09.420 "trace_disable_tpoint_group", 00:05:09.420 "trace_enable_tpoint_group", 00:05:09.420 "trace_clear_tpoint_mask", 00:05:09.420 "trace_set_tpoint_mask", 00:05:09.420 
"spdk_get_version", 00:05:09.420 "rpc_get_methods" 00:05:09.420 ] 00:05:09.420 21:20:32 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:09.420 21:20:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:09.420 21:20:32 -- common/autotest_common.sh@10 -- # set +x 00:05:09.420 21:20:32 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:09.420 21:20:32 -- spdkcli/tcp.sh@38 -- # killprocess 2679284 00:05:09.420 21:20:32 -- common/autotest_common.sh@936 -- # '[' -z 2679284 ']' 00:05:09.420 21:20:32 -- common/autotest_common.sh@940 -- # kill -0 2679284 00:05:09.420 21:20:32 -- common/autotest_common.sh@941 -- # uname 00:05:09.420 21:20:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:09.420 21:20:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2679284 00:05:09.420 21:20:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:09.420 21:20:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:09.420 21:20:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2679284' 00:05:09.420 killing process with pid 2679284 00:05:09.420 21:20:32 -- common/autotest_common.sh@955 -- # kill 2679284 00:05:09.420 21:20:32 -- common/autotest_common.sh@960 -- # wait 2679284 00:05:09.680 00:05:09.680 real 0m1.549s 00:05:09.680 user 0m2.767s 00:05:09.680 sys 0m0.511s 00:05:09.680 21:20:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:09.680 21:20:32 -- common/autotest_common.sh@10 -- # set +x 00:05:09.680 ************************************ 00:05:09.680 END TEST spdkcli_tcp 00:05:09.680 ************************************ 00:05:09.940 21:20:32 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:09.940 21:20:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.940 21:20:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.940 21:20:32 -- common/autotest_common.sh@10 -- # set +x 00:05:09.940 ************************************ 00:05:09.940 START TEST dpdk_mem_utility 00:05:09.940 ************************************ 00:05:09.940 21:20:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:10.200 * Looking for test storage... 00:05:10.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:10.200 21:20:32 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:10.200 21:20:32 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2679621 00:05:10.200 21:20:32 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.200 21:20:32 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2679621 00:05:10.200 21:20:32 -- common/autotest_common.sh@817 -- # '[' -z 2679621 ']' 00:05:10.200 21:20:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.200 21:20:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:10.200 21:20:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:10.200 21:20:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:10.200 21:20:32 -- common/autotest_common.sh@10 -- # set +x 00:05:10.200 [2024-04-24 21:20:32.933699] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:10.200 [2024-04-24 21:20:32.933745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679621 ] 00:05:10.200 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.200 [2024-04-24 21:20:33.002940] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.200 [2024-04-24 21:20:33.073091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.138 21:20:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:11.138 21:20:33 -- common/autotest_common.sh@850 -- # return 0 00:05:11.138 21:20:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:11.138 21:20:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:11.138 21:20:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:11.138 21:20:33 -- common/autotest_common.sh@10 -- # set +x 00:05:11.138 { 00:05:11.138 "filename": "/tmp/spdk_mem_dump.txt" 00:05:11.138 } 00:05:11.138 21:20:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:11.138 21:20:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:11.138 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:11.138 1 heaps totaling size 814.000000 MiB 00:05:11.138 size: 814.000000 MiB heap id: 0 00:05:11.138 end heaps---------- 00:05:11.138 8 mempools totaling size 598.116089 MiB 00:05:11.138 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:11.138 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:11.138 size: 84.521057 MiB name: bdev_io_2679621 00:05:11.138 size: 51.011292 MiB name: evtpool_2679621 00:05:11.138 size: 50.003479 MiB name: msgpool_2679621 00:05:11.138 size: 21.763794 MiB name: PDU_Pool 00:05:11.138 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:11.138 size: 0.026123 MiB name: Session_Pool 00:05:11.138 end mempools------- 00:05:11.138 6 memzones totaling size 4.142822 MiB 00:05:11.138 size: 1.000366 MiB name: RG_ring_0_2679621 00:05:11.138 size: 1.000366 MiB name: RG_ring_1_2679621 00:05:11.138 size: 1.000366 MiB name: RG_ring_4_2679621 00:05:11.138 size: 1.000366 MiB name: RG_ring_5_2679621 00:05:11.138 size: 0.125366 MiB name: RG_ring_2_2679621 00:05:11.138 size: 0.015991 MiB name: RG_ring_3_2679621 00:05:11.138 end memzones------- 00:05:11.138 21:20:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:11.138 heap id: 0 total size: 814.000000 MiB number of busy elements: 42 number of free elements: 15 00:05:11.138 list of free elements. 
size: 12.517212 MiB 00:05:11.138 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:11.138 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:11.138 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:11.138 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:11.138 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:11.138 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:11.138 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:11.139 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:11.139 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:11.139 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:11.139 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:11.139 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:11.139 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:11.139 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:11.139 element at address: 0x200003a00000 with size: 0.353394 MiB 00:05:11.139 list of standard malloc elements. size: 199.220215 MiB 00:05:11.139 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:11.139 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:11.139 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:11.139 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:11.139 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:11.139 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:11.139 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:11.139 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:11.139 element at address: 0x200003aff280 with size: 0.002136 MiB 00:05:11.139 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:11.139 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:11.139 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:11.139 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:11.139 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:11.139 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:11.139 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:11.139 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:11.139 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:11.139 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:11.139 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:11.139 element at address: 0x200003a5a780 with size: 0.000183 MiB 00:05:11.139 element at address: 0x200003adaa40 with size: 0.000183 MiB 00:05:11.139 element at address: 0x200003adac40 with size: 0.000183 MiB 00:05:11.139 element at address: 0x200003adef00 with size: 0.000183 MiB 00:05:11.139 element at address: 0x200003aff1c0 with size: 0.000183 MiB 00:05:11.139 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:11.139 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:11.139 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:11.139 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:11.139 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:11.139 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:11.139 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:11.139 element at address: 0x2000192efc40 with size: 0.000183 MiB 
00:05:11.139 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:11.139 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:11.139 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:11.139 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:11.139 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:11.139 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:11.139 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:11.139 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:11.139 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:11.139 list of memzone associated elements. size: 602.262573 MiB 00:05:11.139 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:11.139 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:11.139 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:11.139 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:11.139 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:11.139 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2679621_0 00:05:11.139 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:11.139 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2679621_0 00:05:11.139 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:11.139 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2679621_0 00:05:11.139 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:11.139 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:11.139 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:11.139 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:11.139 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:11.139 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2679621 00:05:11.139 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:11.139 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2679621 00:05:11.139 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:11.139 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2679621 00:05:11.139 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:11.139 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:11.139 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:11.139 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:11.139 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:11.139 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:11.139 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:11.139 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:11.139 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:11.139 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2679621 00:05:11.139 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:11.139 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2679621 00:05:11.139 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:11.139 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2679621 00:05:11.139 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:11.139 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2679621 00:05:11.139 element at 
address: 0x200003a5a840 with size: 0.500488 MiB 00:05:11.139 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2679621 00:05:11.139 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:11.139 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:11.139 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:11.139 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:11.139 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:11.139 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:11.139 element at address: 0x200003adefc0 with size: 0.125488 MiB 00:05:11.139 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2679621 00:05:11.139 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:11.139 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:11.139 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:11.139 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:11.139 element at address: 0x200003adad00 with size: 0.016113 MiB 00:05:11.139 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2679621 00:05:11.139 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:11.139 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:11.139 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:11.139 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2679621 00:05:11.139 element at address: 0x200003adab00 with size: 0.000305 MiB 00:05:11.139 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2679621 00:05:11.139 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:11.139 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:11.139 21:20:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:11.139 21:20:33 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2679621 00:05:11.139 21:20:33 -- common/autotest_common.sh@936 -- # '[' -z 2679621 ']' 00:05:11.139 21:20:33 -- common/autotest_common.sh@940 -- # kill -0 2679621 00:05:11.139 21:20:33 -- common/autotest_common.sh@941 -- # uname 00:05:11.139 21:20:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:11.139 21:20:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2679621 00:05:11.139 21:20:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:11.139 21:20:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:11.139 21:20:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2679621' 00:05:11.139 killing process with pid 2679621 00:05:11.139 21:20:33 -- common/autotest_common.sh@955 -- # kill 2679621 00:05:11.139 21:20:33 -- common/autotest_common.sh@960 -- # wait 2679621 00:05:11.399 00:05:11.399 real 0m1.446s 00:05:11.399 user 0m1.457s 00:05:11.399 sys 0m0.464s 00:05:11.399 21:20:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.399 21:20:34 -- common/autotest_common.sh@10 -- # set +x 00:05:11.399 ************************************ 00:05:11.399 END TEST dpdk_mem_utility 00:05:11.399 ************************************ 00:05:11.399 21:20:34 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:11.399 21:20:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.399 21:20:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:05:11.399 21:20:34 -- common/autotest_common.sh@10 -- # set +x 00:05:11.659 ************************************ 00:05:11.659 START TEST event 00:05:11.659 ************************************ 00:05:11.659 21:20:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:11.659 * Looking for test storage... 00:05:11.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:11.659 21:20:34 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:11.659 21:20:34 -- bdev/nbd_common.sh@6 -- # set -e 00:05:11.659 21:20:34 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:11.659 21:20:34 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:11.659 21:20:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.659 21:20:34 -- common/autotest_common.sh@10 -- # set +x 00:05:11.919 ************************************ 00:05:11.919 START TEST event_perf 00:05:11.919 ************************************ 00:05:11.919 21:20:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:11.919 Running I/O for 1 seconds...[2024-04-24 21:20:34.714839] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:11.919 [2024-04-24 21:20:34.714911] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679972 ] 00:05:11.919 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.919 [2024-04-24 21:20:34.786406] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.178 [2024-04-24 21:20:34.857000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.178 [2024-04-24 21:20:34.857093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.178 [2024-04-24 21:20:34.857157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.178 [2024-04-24 21:20:34.857159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.118 Running I/O for 1 seconds... 00:05:13.118 lcore 0: 206405 00:05:13.118 lcore 1: 206406 00:05:13.118 lcore 2: 206405 00:05:13.118 lcore 3: 206406 00:05:13.118 done. 
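The four "lcore N:" counters above are the number of events each reactor dispatched during the one-second run. Assuming the test binaries were built in this workspace (they are the ones the traces here invoke), the micro-benchmarks can be rerun by hand; root is typically needed for the hugepage setup the EAL lines report:

# Sketch: rerunning the event-framework micro-benchmarks outside autotest.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Four reactors (-m 0xF) dispatching events for one second (-t 1).
sudo "$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1

# Single-reactor timer test and reactor_perf, as run next in this log.
sudo "$SPDK/test/event/reactor/reactor" -t 1
sudo "$SPDK/test/event/reactor_perf/reactor_perf" -t 1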
00:05:13.118 00:05:13.118 real 0m1.248s 00:05:13.118 user 0m4.153s 00:05:13.118 sys 0m0.091s 00:05:13.118 21:20:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.118 21:20:35 -- common/autotest_common.sh@10 -- # set +x 00:05:13.118 ************************************ 00:05:13.118 END TEST event_perf 00:05:13.118 ************************************ 00:05:13.118 21:20:35 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:13.118 21:20:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:13.118 21:20:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.118 21:20:35 -- common/autotest_common.sh@10 -- # set +x 00:05:13.378 ************************************ 00:05:13.378 START TEST event_reactor 00:05:13.378 ************************************ 00:05:13.378 21:20:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:13.378 [2024-04-24 21:20:36.180673] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:13.378 [2024-04-24 21:20:36.180754] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680268 ] 00:05:13.378 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.378 [2024-04-24 21:20:36.256003] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.638 [2024-04-24 21:20:36.327737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.581 test_start 00:05:14.582 oneshot 00:05:14.582 tick 100 00:05:14.582 tick 100 00:05:14.582 tick 250 00:05:14.582 tick 100 00:05:14.582 tick 100 00:05:14.582 tick 100 00:05:14.582 tick 250 00:05:14.582 tick 500 00:05:14.582 tick 100 00:05:14.582 tick 100 00:05:14.582 tick 250 00:05:14.582 tick 100 00:05:14.582 tick 100 00:05:14.582 test_end 00:05:14.582 00:05:14.582 real 0m1.251s 00:05:14.582 user 0m1.153s 00:05:14.582 sys 0m0.093s 00:05:14.582 21:20:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.582 21:20:37 -- common/autotest_common.sh@10 -- # set +x 00:05:14.582 ************************************ 00:05:14.582 END TEST event_reactor 00:05:14.582 ************************************ 00:05:14.582 21:20:37 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:14.582 21:20:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:14.582 21:20:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.582 21:20:37 -- common/autotest_common.sh@10 -- # set +x 00:05:14.841 ************************************ 00:05:14.842 START TEST event_reactor_perf 00:05:14.842 ************************************ 00:05:14.842 21:20:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:14.842 [2024-04-24 21:20:37.649746] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:05:14.842 [2024-04-24 21:20:37.649814] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680556 ] 00:05:14.842 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.842 [2024-04-24 21:20:37.725524] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.101 [2024-04-24 21:20:37.801907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.040 test_start 00:05:16.040 test_end 00:05:16.040 Performance: 518776 events per second 00:05:16.040 00:05:16.040 real 0m1.259s 00:05:16.040 user 0m1.159s 00:05:16.040 sys 0m0.095s 00:05:16.040 21:20:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.040 21:20:38 -- common/autotest_common.sh@10 -- # set +x 00:05:16.040 ************************************ 00:05:16.040 END TEST event_reactor_perf 00:05:16.040 ************************************ 00:05:16.300 21:20:38 -- event/event.sh@49 -- # uname -s 00:05:16.300 21:20:38 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:16.300 21:20:38 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:16.300 21:20:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.300 21:20:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.300 21:20:38 -- common/autotest_common.sh@10 -- # set +x 00:05:16.300 ************************************ 00:05:16.300 START TEST event_scheduler 00:05:16.300 ************************************ 00:05:16.300 21:20:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:16.560 * Looking for test storage... 00:05:16.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:16.560 21:20:39 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:16.560 21:20:39 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2680880 00:05:16.560 21:20:39 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.560 21:20:39 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:16.560 21:20:39 -- scheduler/scheduler.sh@37 -- # waitforlisten 2680880 00:05:16.560 21:20:39 -- common/autotest_common.sh@817 -- # '[' -z 2680880 ']' 00:05:16.560 21:20:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.560 21:20:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:16.560 21:20:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.560 21:20:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:16.560 21:20:39 -- common/autotest_common.sh@10 -- # set +x 00:05:16.560 [2024-04-24 21:20:39.274085] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:05:16.560 [2024-04-24 21:20:39.274134] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680880 ] 00:05:16.560 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.560 [2024-04-24 21:20:39.342322] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.560 [2024-04-24 21:20:39.412563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.560 [2024-04-24 21:20:39.412647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.560 [2024-04-24 21:20:39.412730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.560 [2024-04-24 21:20:39.412732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.496 21:20:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:17.496 21:20:40 -- common/autotest_common.sh@850 -- # return 0 00:05:17.496 21:20:40 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:17.496 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.496 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.496 POWER: Env isn't set yet! 00:05:17.496 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:17.496 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:17.496 POWER: Cannot set governor of lcore 0 to userspace 00:05:17.496 POWER: Attempting to initialise PSTAT power management... 00:05:17.496 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:17.496 POWER: Initialized successfully for lcore 0 power management 00:05:17.496 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:17.496 POWER: Initialized successfully for lcore 1 power management 00:05:17.496 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:17.496 POWER: Initialized successfully for lcore 2 power management 00:05:17.496 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:17.496 POWER: Initialized successfully for lcore 3 power management 00:05:17.496 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.496 21:20:40 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:17.496 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.496 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.496 [2024-04-24 21:20:40.197226] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
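The scheduler app here was started with --wait-for-rpc, so initialization pauses until the test selects a scheduler and completes init over RPC; the POWER lines show the dynamic scheduler switching each lcore's cpufreq governor to 'performance' (the matching 'powersave' restore appears at shutdown below). The same two RPCs can be issued directly; all three methods used in this sketch appear in the rpc_get_methods listing earlier in this log:

# Sketch: driving a --wait-for-rpc app through scheduler selection by hand.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"             # defaults to /var/tmp/spdk.sock

$RPC framework_set_scheduler dynamic   # must happen before init completes
$RPC framework_start_init              # finish subsystem initialization
$RPC framework_get_scheduler           # verify which scheduler is active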
00:05:17.496 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.496 21:20:40 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:17.497 21:20:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.497 21:20:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.497 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.497 ************************************ 00:05:17.497 START TEST scheduler_create_thread 00:05:17.497 ************************************ 00:05:17.497 21:20:40 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:17.497 21:20:40 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:17.497 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.497 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.497 2 00:05:17.497 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.497 21:20:40 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:17.497 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.497 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.755 3 00:05:17.755 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.755 21:20:40 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:17.755 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.755 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.755 4 00:05:17.755 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.755 21:20:40 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:17.755 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.756 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.756 5 00:05:17.756 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.756 21:20:40 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:17.756 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.756 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.756 6 00:05:17.756 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.756 21:20:40 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:17.756 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.756 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.756 7 00:05:17.756 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.756 21:20:40 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:17.756 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.756 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.756 8 00:05:17.756 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.756 21:20:40 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:17.756 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.756 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.756 9 00:05:17.756 
21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.756 21:20:40 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:17.756 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.756 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:17.756 10 00:05:17.756 21:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.756 21:20:40 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:17.756 21:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.756 21:20:40 -- common/autotest_common.sh@10 -- # set +x 00:05:19.151 21:20:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:19.151 21:20:41 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:19.151 21:20:41 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:19.151 21:20:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:19.151 21:20:41 -- common/autotest_common.sh@10 -- # set +x 00:05:19.771 21:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:19.771 21:20:42 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:19.771 21:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:19.771 21:20:42 -- common/autotest_common.sh@10 -- # set +x 00:05:21.681 21:20:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:21.681 21:20:44 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:21.681 21:20:44 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:21.681 21:20:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:21.681 21:20:44 -- common/autotest_common.sh@10 -- # set +x 00:05:22.255 21:20:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:22.255 00:05:22.255 real 0m4.677s 00:05:22.255 user 0m0.028s 00:05:22.255 sys 0m0.003s 00:05:22.255 21:20:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.255 21:20:45 -- common/autotest_common.sh@10 -- # set +x 00:05:22.255 ************************************ 00:05:22.255 END TEST scheduler_create_thread 00:05:22.255 ************************************ 00:05:22.255 21:20:45 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:22.255 21:20:45 -- scheduler/scheduler.sh@46 -- # killprocess 2680880 00:05:22.255 21:20:45 -- common/autotest_common.sh@936 -- # '[' -z 2680880 ']' 00:05:22.255 21:20:45 -- common/autotest_common.sh@940 -- # kill -0 2680880 00:05:22.255 21:20:45 -- common/autotest_common.sh@941 -- # uname 00:05:22.255 21:20:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:22.255 21:20:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2680880 00:05:22.255 21:20:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:22.255 21:20:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:22.255 21:20:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2680880' 00:05:22.255 killing process with pid 2680880 00:05:22.255 21:20:45 -- common/autotest_common.sh@955 -- # kill 2680880 00:05:22.255 21:20:45 -- common/autotest_common.sh@960 -- # wait 2680880 00:05:22.824 [2024-04-24 21:20:45.415817] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
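The scheduler_thread_create/set_active/delete calls traced above are not core SPDK RPCs; they come from the test's own rpc.py plugin, scheduler_plugin, which lives alongside the scheduler app. Assuming that module is importable (the harness arranges PYTHONPATH when rpc_cmd passes --plugin), the same calls look like this by hand:

# Sketch: the plugin RPCs exercised by scheduler_create_thread.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"

# A thread pinned to core 0 (-m 0x1) reporting 100% busy (-a 100);
# the call returns the new thread id (11 and 12 in the trace above).
$RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
$RPC scheduler_thread_set_active 11 50   # thread 11 now 50% active
$RPC scheduler_thread_delete 12          # drop thread 12 again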
00:05:22.824 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:22.824 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:22.824 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:22.824 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:22.824 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:22.824 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:22.824 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:22.824 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:22.824 00:05:22.824 real 0m6.555s 00:05:22.824 user 0m12.033s 00:05:22.824 sys 0m0.537s 00:05:22.824 21:20:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.824 21:20:45 -- common/autotest_common.sh@10 -- # set +x 00:05:22.824 ************************************ 00:05:22.824 END TEST event_scheduler 00:05:22.824 ************************************ 00:05:22.824 21:20:45 -- event/event.sh@51 -- # modprobe -n nbd 00:05:23.084 21:20:45 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:23.084 21:20:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.084 21:20:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.084 21:20:45 -- common/autotest_common.sh@10 -- # set +x 00:05:23.084 ************************************ 00:05:23.084 START TEST app_repeat 00:05:23.084 ************************************ 00:05:23.084 21:20:45 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:23.084 21:20:45 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.084 21:20:45 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.084 21:20:45 -- event/event.sh@13 -- # local nbd_list 00:05:23.084 21:20:45 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.084 21:20:45 -- event/event.sh@14 -- # local bdev_list 00:05:23.084 21:20:45 -- event/event.sh@15 -- # local repeat_times=4 00:05:23.084 21:20:45 -- event/event.sh@17 -- # modprobe nbd 00:05:23.084 21:20:45 -- event/event.sh@19 -- # repeat_pid=2682020 00:05:23.084 21:20:45 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.084 21:20:45 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:23.084 21:20:45 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2682020' 00:05:23.084 Process app_repeat pid: 2682020 00:05:23.084 21:20:45 -- event/event.sh@23 -- # for i in {0..2} 00:05:23.084 21:20:45 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:23.084 spdk_app_start Round 0 00:05:23.084 21:20:45 -- event/event.sh@25 -- # waitforlisten 2682020 /var/tmp/spdk-nbd.sock 00:05:23.084 21:20:45 -- common/autotest_common.sh@817 -- # '[' -z 2682020 ']' 00:05:23.084 21:20:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.084 21:20:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:23.084 21:20:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:23.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.084 21:20:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:23.084 21:20:45 -- common/autotest_common.sh@10 -- # set +x 00:05:23.084 [2024-04-24 21:20:45.901618] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:23.084 [2024-04-24 21:20:45.901676] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682020 ] 00:05:23.084 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.084 [2024-04-24 21:20:45.970419] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.343 [2024-04-24 21:20:46.044368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.343 [2024-04-24 21:20:46.044372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.913 21:20:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:23.913 21:20:46 -- common/autotest_common.sh@850 -- # return 0 00:05:23.913 21:20:46 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.172 Malloc0 00:05:24.172 21:20:46 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.172 Malloc1 00:05:24.431 21:20:47 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@12 -- # local i 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.431 /dev/nbd0 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.431 21:20:47 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:24.431 21:20:47 -- common/autotest_common.sh@855 -- # local i 00:05:24.431 21:20:47 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:24.431 21:20:47 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:24.431 21:20:47 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:24.431 21:20:47 -- 
common/autotest_common.sh@859 -- # break 00:05:24.431 21:20:47 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:24.431 21:20:47 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:24.431 21:20:47 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.431 1+0 records in 00:05:24.431 1+0 records out 00:05:24.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217668 s, 18.8 MB/s 00:05:24.431 21:20:47 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.431 21:20:47 -- common/autotest_common.sh@872 -- # size=4096 00:05:24.431 21:20:47 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.431 21:20:47 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:24.431 21:20:47 -- common/autotest_common.sh@875 -- # return 0 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.431 21:20:47 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.690 /dev/nbd1 00:05:24.690 21:20:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.690 21:20:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.690 21:20:47 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:24.690 21:20:47 -- common/autotest_common.sh@855 -- # local i 00:05:24.690 21:20:47 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:24.690 21:20:47 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:24.690 21:20:47 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:24.690 21:20:47 -- common/autotest_common.sh@859 -- # break 00:05:24.690 21:20:47 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:24.690 21:20:47 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:24.690 21:20:47 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.690 1+0 records in 00:05:24.690 1+0 records out 00:05:24.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240195 s, 17.1 MB/s 00:05:24.690 21:20:47 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.690 21:20:47 -- common/autotest_common.sh@872 -- # size=4096 00:05:24.690 21:20:47 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.691 21:20:47 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:24.691 21:20:47 -- common/autotest_common.sh@875 -- # return 0 00:05:24.691 21:20:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.691 21:20:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.691 21:20:47 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.691 21:20:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.691 21:20:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.950 { 00:05:24.950 "nbd_device": "/dev/nbd0", 00:05:24.950 "bdev_name": "Malloc0" 00:05:24.950 }, 00:05:24.950 { 00:05:24.950 "nbd_device": "/dev/nbd1", 
00:05:24.950 "bdev_name": "Malloc1" 00:05:24.950 } 00:05:24.950 ]' 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.950 { 00:05:24.950 "nbd_device": "/dev/nbd0", 00:05:24.950 "bdev_name": "Malloc0" 00:05:24.950 }, 00:05:24.950 { 00:05:24.950 "nbd_device": "/dev/nbd1", 00:05:24.950 "bdev_name": "Malloc1" 00:05:24.950 } 00:05:24.950 ]' 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.950 /dev/nbd1' 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.950 /dev/nbd1' 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.950 256+0 records in 00:05:24.950 256+0 records out 00:05:24.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113671 s, 92.2 MB/s 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.950 256+0 records in 00:05:24.950 256+0 records out 00:05:24.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196853 s, 53.3 MB/s 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.950 256+0 records in 00:05:24.950 256+0 records out 00:05:24.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195994 s, 53.5 MB/s 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@51 -- # local i 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.950 21:20:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.210 21:20:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.210 21:20:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.210 21:20:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.210 21:20:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.210 21:20:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.210 21:20:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.210 21:20:47 -- bdev/nbd_common.sh@41 -- # break 00:05:25.210 21:20:47 -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.210 21:20:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.210 21:20:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.469 21:20:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.469 21:20:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.469 21:20:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.469 21:20:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.469 21:20:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.469 21:20:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.469 21:20:48 -- bdev/nbd_common.sh@41 -- # break 00:05:25.469 21:20:48 -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.469 21:20:48 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.469 21:20:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.469 21:20:48 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.728 21:20:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.728 21:20:48 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.728 21:20:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.728 21:20:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.728 21:20:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.728 21:20:48 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.728 21:20:48 -- bdev/nbd_common.sh@65 -- # true 00:05:25.728 21:20:48 -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.728 21:20:48 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.728 21:20:48 -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.728 21:20:48 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.728 21:20:48 -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.728 21:20:48 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.728 21:20:48 -- event/event.sh@35 -- # 
sleep 3 00:05:25.987 [2024-04-24 21:20:48.803262] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.988 [2024-04-24 21:20:48.865282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.988 [2024-04-24 21:20:48.865285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.248 [2024-04-24 21:20:48.906551] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.248 [2024-04-24 21:20:48.906596] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.789 21:20:51 -- event/event.sh@23 -- # for i in {0..2} 00:05:28.789 21:20:51 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:28.789 spdk_app_start Round 1 00:05:28.789 21:20:51 -- event/event.sh@25 -- # waitforlisten 2682020 /var/tmp/spdk-nbd.sock 00:05:28.789 21:20:51 -- common/autotest_common.sh@817 -- # '[' -z 2682020 ']' 00:05:28.789 21:20:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.789 21:20:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:28.789 21:20:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.789 21:20:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:28.789 21:20:51 -- common/autotest_common.sh@10 -- # set +x 00:05:29.048 21:20:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:29.048 21:20:51 -- common/autotest_common.sh@850 -- # return 0 00:05:29.048 21:20:51 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.048 Malloc0 00:05:29.308 21:20:51 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.308 Malloc1 00:05:29.308 21:20:52 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.308 21:20:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.308 21:20:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.308 21:20:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.308 21:20:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.308 21:20:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.309 21:20:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.309 21:20:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.309 21:20:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.309 21:20:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.309 21:20:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.309 21:20:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.309 21:20:52 -- bdev/nbd_common.sh@12 -- # local i 00:05:29.309 21:20:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.309 21:20:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.309 21:20:52 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.569 /dev/nbd0 00:05:29.569 21:20:52 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.569 21:20:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.569 21:20:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:29.569 21:20:52 -- common/autotest_common.sh@855 -- # local i 00:05:29.569 21:20:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:29.569 21:20:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:29.569 21:20:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:29.569 21:20:52 -- common/autotest_common.sh@859 -- # break 00:05:29.569 21:20:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:29.569 21:20:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:29.569 21:20:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.569 1+0 records in 00:05:29.569 1+0 records out 00:05:29.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000134541 s, 30.4 MB/s 00:05:29.569 21:20:52 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.569 21:20:52 -- common/autotest_common.sh@872 -- # size=4096 00:05:29.569 21:20:52 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.569 21:20:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:29.569 21:20:52 -- common/autotest_common.sh@875 -- # return 0 00:05:29.569 21:20:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.569 21:20:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.569 21:20:52 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.829 /dev/nbd1 00:05:29.829 21:20:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.829 21:20:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.829 21:20:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:29.829 21:20:52 -- common/autotest_common.sh@855 -- # local i 00:05:29.829 21:20:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:29.829 21:20:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:29.829 21:20:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:29.829 21:20:52 -- common/autotest_common.sh@859 -- # break 00:05:29.829 21:20:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:29.829 21:20:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:29.829 21:20:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.829 1+0 records in 00:05:29.829 1+0 records out 00:05:29.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260619 s, 15.7 MB/s 00:05:29.829 21:20:52 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.829 21:20:52 -- common/autotest_common.sh@872 -- # size=4096 00:05:29.829 21:20:52 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.829 21:20:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:29.829 21:20:52 -- common/autotest_common.sh@875 -- # return 0 00:05:29.829 21:20:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.829 21:20:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.829 21:20:52 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.829 21:20:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.829 21:20:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.829 21:20:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:29.829 { 00:05:29.829 "nbd_device": "/dev/nbd0", 00:05:29.829 "bdev_name": "Malloc0" 00:05:29.829 }, 00:05:29.829 { 00:05:29.829 "nbd_device": "/dev/nbd1", 00:05:29.829 "bdev_name": "Malloc1" 00:05:29.829 } 00:05:29.829 ]' 00:05:29.829 21:20:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.829 { 00:05:29.829 "nbd_device": "/dev/nbd0", 00:05:29.829 "bdev_name": "Malloc0" 00:05:29.829 }, 00:05:29.829 { 00:05:29.829 "nbd_device": "/dev/nbd1", 00:05:29.829 "bdev_name": "Malloc1" 00:05:29.829 } 00:05:29.829 ]' 00:05:29.829 21:20:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.088 /dev/nbd1' 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.088 /dev/nbd1' 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.088 256+0 records in 00:05:30.088 256+0 records out 00:05:30.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114026 s, 92.0 MB/s 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.088 256+0 records in 00:05:30.088 256+0 records out 00:05:30.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158311 s, 66.2 MB/s 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.088 256+0 records in 00:05:30.088 256+0 records out 00:05:30.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202111 s, 51.9 MB/s 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@51 -- # local i 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.088 21:20:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.351 21:20:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@41 -- # break 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@41 -- # break 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.352 21:20:53 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.613 21:20:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.613 21:20:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.613 21:20:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.613 21:20:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.613 21:20:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.613 21:20:53 -- 
bdev/nbd_common.sh@65 -- # echo '' 00:05:30.613 21:20:53 -- bdev/nbd_common.sh@65 -- # true 00:05:30.613 21:20:53 -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.613 21:20:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.613 21:20:53 -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.613 21:20:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.613 21:20:53 -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.613 21:20:53 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.872 21:20:53 -- event/event.sh@35 -- # sleep 3 00:05:31.138 [2024-04-24 21:20:53.837417] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.138 [2024-04-24 21:20:53.899756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.138 [2024-04-24 21:20:53.899758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.138 [2024-04-24 21:20:53.942143] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.138 [2024-04-24 21:20:53.942189] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.456 21:20:56 -- event/event.sh@23 -- # for i in {0..2} 00:05:34.456 21:20:56 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:34.456 spdk_app_start Round 2 00:05:34.456 21:20:56 -- event/event.sh@25 -- # waitforlisten 2682020 /var/tmp/spdk-nbd.sock 00:05:34.456 21:20:56 -- common/autotest_common.sh@817 -- # '[' -z 2682020 ']' 00:05:34.456 21:20:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.456 21:20:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:34.456 21:20:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
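The log has now shown the same write/verify cycle twice, and Round 2 is about to run it again. Condensed, the cycle nbd_dd_data_verify performs is the following sketch, assuming Malloc0 and Malloc1 are already attached to the two NBD devices (tmp stands in for the spdk/test/event/nbdrandtest path seen above):

  tmp=nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256                # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct       # write it through each NBD device
    cmp -b -n 1M "$tmp" "$nbd"                                  # read back and byte-compare
  done
  rm "$tmp"                                                     # cleanup, as nbd_common.sh@85 does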
00:05:34.456 21:20:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:34.456 21:20:56 -- common/autotest_common.sh@10 -- # set +x 00:05:34.456 21:20:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:34.456 21:20:56 -- common/autotest_common.sh@850 -- # return 0 00:05:34.456 21:20:56 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.456 Malloc0 00:05:34.456 21:20:56 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.456 Malloc1 00:05:34.456 21:20:57 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@12 -- # local i 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.456 /dev/nbd0 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.456 21:20:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.456 21:20:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:34.456 21:20:57 -- common/autotest_common.sh@855 -- # local i 00:05:34.456 21:20:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:34.456 21:20:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:34.456 21:20:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:34.456 21:20:57 -- common/autotest_common.sh@859 -- # break 00:05:34.456 21:20:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:34.456 21:20:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:34.456 21:20:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.716 1+0 records in 00:05:34.716 1+0 records out 00:05:34.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000121602 s, 33.7 MB/s 00:05:34.716 21:20:57 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.716 21:20:57 -- common/autotest_common.sh@872 -- # size=4096 00:05:34.716 21:20:57 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.716 21:20:57 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:05:34.716 21:20:57 -- common/autotest_common.sh@875 -- # return 0 00:05:34.716 21:20:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.716 21:20:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.716 21:20:57 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.716 /dev/nbd1 00:05:34.716 21:20:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.716 21:20:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.716 21:20:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:34.716 21:20:57 -- common/autotest_common.sh@855 -- # local i 00:05:34.716 21:20:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:34.716 21:20:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:34.716 21:20:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:34.716 21:20:57 -- common/autotest_common.sh@859 -- # break 00:05:34.716 21:20:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:34.716 21:20:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:34.716 21:20:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.716 1+0 records in 00:05:34.716 1+0 records out 00:05:34.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000128235 s, 31.9 MB/s 00:05:34.716 21:20:57 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.716 21:20:57 -- common/autotest_common.sh@872 -- # size=4096 00:05:34.716 21:20:57 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:34.716 21:20:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:34.716 21:20:57 -- common/autotest_common.sh@875 -- # return 0 00:05:34.716 21:20:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.716 21:20:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.716 21:20:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.716 21:20:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.716 21:20:57 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.975 { 00:05:34.975 "nbd_device": "/dev/nbd0", 00:05:34.975 "bdev_name": "Malloc0" 00:05:34.975 }, 00:05:34.975 { 00:05:34.975 "nbd_device": "/dev/nbd1", 00:05:34.975 "bdev_name": "Malloc1" 00:05:34.975 } 00:05:34.975 ]' 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.975 { 00:05:34.975 "nbd_device": "/dev/nbd0", 00:05:34.975 "bdev_name": "Malloc0" 00:05:34.975 }, 00:05:34.975 { 00:05:34.975 "nbd_device": "/dev/nbd1", 00:05:34.975 "bdev_name": "Malloc1" 00:05:34.975 } 00:05:34.975 ]' 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.975 /dev/nbd1' 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.975 /dev/nbd1' 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.975 21:20:57 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.975 256+0 records in 00:05:34.975 256+0 records out 00:05:34.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105739 s, 99.2 MB/s 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.975 256+0 records in 00:05:34.975 256+0 records out 00:05:34.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194208 s, 54.0 MB/s 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.975 256+0 records in 00:05:34.975 256+0 records out 00:05:34.975 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183292 s, 57.2 MB/s 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@51 -- # local i 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.975 21:20:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:35.234 21:20:58 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.234 21:20:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.234 21:20:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.234 21:20:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.234 21:20:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.234 21:20:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.234 21:20:58 -- bdev/nbd_common.sh@41 -- # break 00:05:35.234 21:20:58 -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.234 21:20:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.234 21:20:58 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:35.494 21:20:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:35.494 21:20:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:35.494 21:20:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:35.494 21:20:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.494 21:20:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.494 21:20:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:35.494 21:20:58 -- bdev/nbd_common.sh@41 -- # break 00:05:35.494 21:20:58 -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.494 21:20:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.494 21:20:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.494 21:20:58 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.753 21:20:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.753 21:20:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:35.753 21:20:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.753 21:20:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.753 21:20:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.753 21:20:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.753 21:20:58 -- bdev/nbd_common.sh@65 -- # true 00:05:35.753 21:20:58 -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.753 21:20:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.753 21:20:58 -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.753 21:20:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.753 21:20:58 -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.753 21:20:58 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.011 21:20:58 -- event/event.sh@35 -- # sleep 3 00:05:36.011 [2024-04-24 21:20:58.868903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.271 [2024-04-24 21:20:58.931039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.271 [2024-04-24 21:20:58.931041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.271 [2024-04-24 21:20:58.972263] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.271 [2024-04-24 21:20:58.972310] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
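Every round of this test ends the same way: the spdk_kill_instance RPC delivers SIGTERM to the running app, the script sleeps, and the app (started with -t 4) comes back up for the next round. A condensed sketch of the driver loop implied by the event.sh xtrace, with the verify step being the cycle sketched earlier:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in {0..2}; do
    echo "spdk_app_start Round $i"
    # create Malloc0/Malloc1, attach /dev/nbd0 and /dev/nbd1, run the dd/cmp verify
    "$rpc" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # end this round
    sleep 3                                                       # give the app time to restart
  done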
00:05:38.810 21:21:01 -- event/event.sh@38 -- # waitforlisten 2682020 /var/tmp/spdk-nbd.sock 00:05:38.810 21:21:01 -- common/autotest_common.sh@817 -- # '[' -z 2682020 ']' 00:05:38.810 21:21:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.810 21:21:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:38.810 21:21:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.810 21:21:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:38.810 21:21:01 -- common/autotest_common.sh@10 -- # set +x 00:05:39.070 21:21:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:39.070 21:21:01 -- common/autotest_common.sh@850 -- # return 0 00:05:39.070 21:21:01 -- event/event.sh@39 -- # killprocess 2682020 00:05:39.070 21:21:01 -- common/autotest_common.sh@936 -- # '[' -z 2682020 ']' 00:05:39.070 21:21:01 -- common/autotest_common.sh@940 -- # kill -0 2682020 00:05:39.070 21:21:01 -- common/autotest_common.sh@941 -- # uname 00:05:39.070 21:21:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:39.070 21:21:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2682020 00:05:39.070 21:21:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:39.070 21:21:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:39.070 21:21:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2682020' 00:05:39.070 killing process with pid 2682020 00:05:39.070 21:21:01 -- common/autotest_common.sh@955 -- # kill 2682020 00:05:39.070 21:21:01 -- common/autotest_common.sh@960 -- # wait 2682020 00:05:39.330 spdk_app_start is called in Round 0. 00:05:39.330 Shutdown signal received, stop current app iteration 00:05:39.330 Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 reinitialization... 00:05:39.330 spdk_app_start is called in Round 1. 00:05:39.330 Shutdown signal received, stop current app iteration 00:05:39.330 Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 reinitialization... 00:05:39.330 spdk_app_start is called in Round 2. 00:05:39.330 Shutdown signal received, stop current app iteration 00:05:39.330 Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 reinitialization... 00:05:39.330 spdk_app_start is called in Round 3. 
00:05:39.330 Shutdown signal received, stop current app iteration 00:05:39.330 21:21:02 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:39.330 21:21:02 -- event/event.sh@42 -- # return 0 00:05:39.330 00:05:39.330 real 0m16.213s 00:05:39.330 user 0m34.383s 00:05:39.330 sys 0m2.926s 00:05:39.330 21:21:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.330 21:21:02 -- common/autotest_common.sh@10 -- # set +x 00:05:39.330 ************************************ 00:05:39.330 END TEST app_repeat 00:05:39.330 ************************************ 00:05:39.330 21:21:02 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:39.330 21:21:02 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:39.330 21:21:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.330 21:21:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.330 21:21:02 -- common/autotest_common.sh@10 -- # set +x 00:05:39.590 ************************************ 00:05:39.590 START TEST cpu_locks 00:05:39.590 ************************************ 00:05:39.590 21:21:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:39.590 * Looking for test storage... 00:05:39.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:39.590 21:21:02 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:39.590 21:21:02 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:39.590 21:21:02 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:39.590 21:21:02 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:39.590 21:21:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.590 21:21:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.590 21:21:02 -- common/autotest_common.sh@10 -- # set +x 00:05:39.850 ************************************ 00:05:39.850 START TEST default_locks 00:05:39.850 ************************************ 00:05:39.850 21:21:02 -- common/autotest_common.sh@1111 -- # default_locks 00:05:39.850 21:21:02 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2685310 00:05:39.850 21:21:02 -- event/cpu_locks.sh@47 -- # waitforlisten 2685310 00:05:39.850 21:21:02 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.850 21:21:02 -- common/autotest_common.sh@817 -- # '[' -z 2685310 ']' 00:05:39.850 21:21:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.850 21:21:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:39.850 21:21:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.850 21:21:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:39.850 21:21:02 -- common/autotest_common.sh@10 -- # set +x 00:05:39.850 [2024-04-24 21:21:02.576555] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:05:39.850 [2024-04-24 21:21:02.576600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2685310 ] 00:05:39.850 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.850 [2024-04-24 21:21:02.644371] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.850 [2024-04-24 21:21:02.717203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.787 21:21:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:40.787 21:21:03 -- common/autotest_common.sh@850 -- # return 0 00:05:40.787 21:21:03 -- event/cpu_locks.sh@49 -- # locks_exist 2685310 00:05:40.787 21:21:03 -- event/cpu_locks.sh@22 -- # lslocks -p 2685310 00:05:40.787 21:21:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.357 lslocks: write error 00:05:41.357 21:21:03 -- event/cpu_locks.sh@50 -- # killprocess 2685310 00:05:41.357 21:21:03 -- common/autotest_common.sh@936 -- # '[' -z 2685310 ']' 00:05:41.357 21:21:03 -- common/autotest_common.sh@940 -- # kill -0 2685310 00:05:41.357 21:21:03 -- common/autotest_common.sh@941 -- # uname 00:05:41.357 21:21:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:41.357 21:21:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2685310 00:05:41.357 21:21:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:41.357 21:21:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:41.357 21:21:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2685310' 00:05:41.357 killing process with pid 2685310 00:05:41.357 21:21:04 -- common/autotest_common.sh@955 -- # kill 2685310 00:05:41.357 21:21:04 -- common/autotest_common.sh@960 -- # wait 2685310 00:05:41.617 21:21:04 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2685310 00:05:41.617 21:21:04 -- common/autotest_common.sh@638 -- # local es=0 00:05:41.617 21:21:04 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2685310 00:05:41.617 21:21:04 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:41.617 21:21:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:41.617 21:21:04 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:41.617 21:21:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:41.617 21:21:04 -- common/autotest_common.sh@641 -- # waitforlisten 2685310 00:05:41.617 21:21:04 -- common/autotest_common.sh@817 -- # '[' -z 2685310 ']' 00:05:41.617 21:21:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.617 21:21:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:41.617 21:21:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
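The default_locks case that follows hinges on lslocks: a target started with -m 0x1 takes an advisory file lock for each core it claims, and the test greps the lslocks output for it. The stray "lslocks: write error" line below is benign; grep -q exits on its first match and closes the pipe. A sketch of that check, assuming the lock files keep the spdk_cpu_lock naming used in the grep:

  pid=2685310                                      # the spdk_tgt pid from this run
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock is held"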
00:05:41.617 21:21:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:41.617 21:21:04 -- common/autotest_common.sh@10 -- # set +x 00:05:41.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2685310) - No such process 00:05:41.617 ERROR: process (pid: 2685310) is no longer running 00:05:41.617 21:21:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:41.617 21:21:04 -- common/autotest_common.sh@850 -- # return 1 00:05:41.617 21:21:04 -- common/autotest_common.sh@641 -- # es=1 00:05:41.617 21:21:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:41.617 21:21:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:41.617 21:21:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:41.617 21:21:04 -- event/cpu_locks.sh@54 -- # no_locks 00:05:41.617 21:21:04 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:41.617 21:21:04 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:41.617 21:21:04 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:41.617 00:05:41.617 real 0m1.873s 00:05:41.617 user 0m1.970s 00:05:41.617 sys 0m0.620s 00:05:41.617 21:21:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:41.617 21:21:04 -- common/autotest_common.sh@10 -- # set +x 00:05:41.617 ************************************ 00:05:41.617 END TEST default_locks 00:05:41.617 ************************************ 00:05:41.617 21:21:04 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:41.617 21:21:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.617 21:21:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.617 21:21:04 -- common/autotest_common.sh@10 -- # set +x 00:05:41.877 ************************************ 00:05:41.877 START TEST default_locks_via_rpc 00:05:41.877 ************************************ 00:05:41.877 21:21:04 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:41.877 21:21:04 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2685875 00:05:41.877 21:21:04 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.877 21:21:04 -- event/cpu_locks.sh@63 -- # waitforlisten 2685875 00:05:41.877 21:21:04 -- common/autotest_common.sh@817 -- # '[' -z 2685875 ']' 00:05:41.877 21:21:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.877 21:21:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:41.877 21:21:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.877 21:21:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:41.877 21:21:04 -- common/autotest_common.sh@10 -- # set +x 00:05:41.877 [2024-04-24 21:21:04.637211] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:05:41.877 [2024-04-24 21:21:04.637252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2685875 ] 00:05:41.877 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.877 [2024-04-24 21:21:04.706075] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.137 [2024-04-24 21:21:04.781385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.706 21:21:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:42.706 21:21:05 -- common/autotest_common.sh@850 -- # return 0 00:05:42.706 21:21:05 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:42.706 21:21:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.706 21:21:05 -- common/autotest_common.sh@10 -- # set +x 00:05:42.706 21:21:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.706 21:21:05 -- event/cpu_locks.sh@67 -- # no_locks 00:05:42.706 21:21:05 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:42.706 21:21:05 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:42.706 21:21:05 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:42.706 21:21:05 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:42.706 21:21:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:42.706 21:21:05 -- common/autotest_common.sh@10 -- # set +x 00:05:42.706 21:21:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:42.706 21:21:05 -- event/cpu_locks.sh@71 -- # locks_exist 2685875 00:05:42.706 21:21:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.706 21:21:05 -- event/cpu_locks.sh@22 -- # lslocks -p 2685875 00:05:42.966 21:21:05 -- event/cpu_locks.sh@73 -- # killprocess 2685875 00:05:42.966 21:21:05 -- common/autotest_common.sh@936 -- # '[' -z 2685875 ']' 00:05:42.966 21:21:05 -- common/autotest_common.sh@940 -- # kill -0 2685875 00:05:42.966 21:21:05 -- common/autotest_common.sh@941 -- # uname 00:05:42.966 21:21:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.966 21:21:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2685875 00:05:42.966 21:21:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.966 21:21:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.966 21:21:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2685875' 00:05:42.966 killing process with pid 2685875 00:05:42.966 21:21:05 -- common/autotest_common.sh@955 -- # kill 2685875 00:05:42.966 21:21:05 -- common/autotest_common.sh@960 -- # wait 2685875 00:05:43.534 00:05:43.534 real 0m1.539s 00:05:43.534 user 0m1.585s 00:05:43.534 sys 0m0.534s 00:05:43.534 21:21:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:43.534 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:05:43.534 ************************************ 00:05:43.534 END TEST default_locks_via_rpc 00:05:43.534 ************************************ 00:05:43.534 21:21:06 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:43.534 21:21:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.534 21:21:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.534 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:05:43.534 ************************************ 00:05:43.534 START TEST non_locking_app_on_locked_coremask 
00:05:43.534 ************************************ 00:05:43.534 21:21:06 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:43.534 21:21:06 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2686529 00:05:43.534 21:21:06 -- event/cpu_locks.sh@81 -- # waitforlisten 2686529 /var/tmp/spdk.sock 00:05:43.534 21:21:06 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.534 21:21:06 -- common/autotest_common.sh@817 -- # '[' -z 2686529 ']' 00:05:43.534 21:21:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.534 21:21:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:43.534 21:21:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.535 21:21:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:43.535 21:21:06 -- common/autotest_common.sh@10 -- # set +x 00:05:43.535 [2024-04-24 21:21:06.358968] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:43.535 [2024-04-24 21:21:06.359014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2686529 ] 00:05:43.535 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.794 [2024-04-24 21:21:06.428670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.794 [2024-04-24 21:21:06.502650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.365 21:21:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:44.365 21:21:07 -- common/autotest_common.sh@850 -- # return 0 00:05:44.365 21:21:07 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2686618 00:05:44.365 21:21:07 -- event/cpu_locks.sh@85 -- # waitforlisten 2686618 /var/tmp/spdk2.sock 00:05:44.365 21:21:07 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:44.365 21:21:07 -- common/autotest_common.sh@817 -- # '[' -z 2686618 ']' 00:05:44.365 21:21:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.365 21:21:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:44.365 21:21:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.365 21:21:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:44.365 21:21:07 -- common/autotest_common.sh@10 -- # set +x 00:05:44.365 [2024-04-24 21:21:07.190791] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:44.366 [2024-04-24 21:21:07.190843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2686618 ] 00:05:44.366 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.627 [2024-04-24 21:21:07.291780] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:44.627 [2024-04-24 21:21:07.291812] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.627 [2024-04-24 21:21:07.429142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.195 21:21:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:45.195 21:21:07 -- common/autotest_common.sh@850 -- # return 0 00:05:45.195 21:21:07 -- event/cpu_locks.sh@87 -- # locks_exist 2686529 00:05:45.195 21:21:07 -- event/cpu_locks.sh@22 -- # lslocks -p 2686529 00:05:45.195 21:21:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.575 lslocks: write error 00:05:46.575 21:21:09 -- event/cpu_locks.sh@89 -- # killprocess 2686529 00:05:46.575 21:21:09 -- common/autotest_common.sh@936 -- # '[' -z 2686529 ']' 00:05:46.575 21:21:09 -- common/autotest_common.sh@940 -- # kill -0 2686529 00:05:46.575 21:21:09 -- common/autotest_common.sh@941 -- # uname 00:05:46.575 21:21:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.575 21:21:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2686529 00:05:46.575 21:21:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.575 21:21:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.575 21:21:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2686529' 00:05:46.575 killing process with pid 2686529 00:05:46.575 21:21:09 -- common/autotest_common.sh@955 -- # kill 2686529 00:05:46.575 21:21:09 -- common/autotest_common.sh@960 -- # wait 2686529 00:05:47.143 21:21:09 -- event/cpu_locks.sh@90 -- # killprocess 2686618 00:05:47.143 21:21:09 -- common/autotest_common.sh@936 -- # '[' -z 2686618 ']' 00:05:47.143 21:21:09 -- common/autotest_common.sh@940 -- # kill -0 2686618 00:05:47.143 21:21:09 -- common/autotest_common.sh@941 -- # uname 00:05:47.144 21:21:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:47.144 21:21:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2686618 00:05:47.144 21:21:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:47.144 21:21:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:47.144 21:21:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2686618' 00:05:47.144 killing process with pid 2686618 00:05:47.144 21:21:09 -- common/autotest_common.sh@955 -- # kill 2686618 00:05:47.144 21:21:09 -- common/autotest_common.sh@960 -- # wait 2686618 00:05:47.403 00:05:47.403 real 0m3.871s 00:05:47.403 user 0m4.128s 00:05:47.403 sys 0m1.291s 00:05:47.403 21:21:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:47.403 21:21:10 -- common/autotest_common.sh@10 -- # set +x 00:05:47.403 ************************************ 00:05:47.403 END TEST non_locking_app_on_locked_coremask 00:05:47.403 ************************************ 00:05:47.403 21:21:10 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:47.403 21:21:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.403 21:21:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.403 21:21:10 -- common/autotest_common.sh@10 -- # set +x 00:05:47.662 ************************************ 00:05:47.662 START TEST locking_app_on_unlocked_coremask 00:05:47.662 ************************************ 00:05:47.662 21:21:10 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:47.662 21:21:10 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2687200 00:05:47.662 21:21:10 -- 
event/cpu_locks.sh@99 -- # waitforlisten 2687200 /var/tmp/spdk.sock 00:05:47.662 21:21:10 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:47.662 21:21:10 -- common/autotest_common.sh@817 -- # '[' -z 2687200 ']' 00:05:47.662 21:21:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.662 21:21:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:47.662 21:21:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.662 21:21:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:47.662 21:21:10 -- common/autotest_common.sh@10 -- # set +x 00:05:47.662 [2024-04-24 21:21:10.449894] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:47.662 [2024-04-24 21:21:10.449942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687200 ] 00:05:47.662 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.662 [2024-04-24 21:21:10.521138] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:47.662 [2024-04-24 21:21:10.521165] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.921 [2024-04-24 21:21:10.589086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.489 21:21:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.489 21:21:11 -- common/autotest_common.sh@850 -- # return 0 00:05:48.489 21:21:11 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2687463 00:05:48.489 21:21:11 -- event/cpu_locks.sh@103 -- # waitforlisten 2687463 /var/tmp/spdk2.sock 00:05:48.489 21:21:11 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:48.489 21:21:11 -- common/autotest_common.sh@817 -- # '[' -z 2687463 ']' 00:05:48.489 21:21:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.489 21:21:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:48.489 21:21:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.489 21:21:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:48.489 21:21:11 -- common/autotest_common.sh@10 -- # set +x 00:05:48.489 [2024-04-24 21:21:11.288424] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
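Relative to the previous test the roles are reversed: here the first target is started with --disable-cpumask-locks, so the core-0 lock file stays free and the plain second instance can take it (the test then runs locks_exist against the second pid, 2687463). A sketch under the same assumptions as above:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &    # holds no lock file
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # succeeds and claims the core-0 lock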
00:05:48.489 [2024-04-24 21:21:11.288484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687463 ] 00:05:48.489 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.748 [2024-04-24 21:21:11.383148] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.748 [2024-04-24 21:21:11.523587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.317 21:21:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:49.317 21:21:12 -- common/autotest_common.sh@850 -- # return 0 00:05:49.317 21:21:12 -- event/cpu_locks.sh@105 -- # locks_exist 2687463 00:05:49.317 21:21:12 -- event/cpu_locks.sh@22 -- # lslocks -p 2687463 00:05:49.317 21:21:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.885 lslocks: write error 00:05:49.885 21:21:12 -- event/cpu_locks.sh@107 -- # killprocess 2687200 00:05:49.885 21:21:12 -- common/autotest_common.sh@936 -- # '[' -z 2687200 ']' 00:05:49.885 21:21:12 -- common/autotest_common.sh@940 -- # kill -0 2687200 00:05:49.885 21:21:12 -- common/autotest_common.sh@941 -- # uname 00:05:49.885 21:21:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:49.886 21:21:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2687200 00:05:50.145 21:21:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:50.145 21:21:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:50.145 21:21:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2687200' 00:05:50.145 killing process with pid 2687200 00:05:50.145 21:21:12 -- common/autotest_common.sh@955 -- # kill 2687200 00:05:50.145 21:21:12 -- common/autotest_common.sh@960 -- # wait 2687200 00:05:50.715 21:21:13 -- event/cpu_locks.sh@108 -- # killprocess 2687463 00:05:50.715 21:21:13 -- common/autotest_common.sh@936 -- # '[' -z 2687463 ']' 00:05:50.715 21:21:13 -- common/autotest_common.sh@940 -- # kill -0 2687463 00:05:50.715 21:21:13 -- common/autotest_common.sh@941 -- # uname 00:05:50.715 21:21:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.715 21:21:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2687463 00:05:50.715 21:21:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:50.715 21:21:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:50.715 21:21:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2687463' 00:05:50.715 killing process with pid 2687463 00:05:50.715 21:21:13 -- common/autotest_common.sh@955 -- # kill 2687463 00:05:50.715 21:21:13 -- common/autotest_common.sh@960 -- # wait 2687463 00:05:51.285 00:05:51.285 real 0m3.472s 00:05:51.285 user 0m3.676s 00:05:51.285 sys 0m1.083s 00:05:51.285 21:21:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.285 21:21:13 -- common/autotest_common.sh@10 -- # set +x 00:05:51.285 ************************************ 00:05:51.285 END TEST locking_app_on_unlocked_coremask 00:05:51.285 ************************************ 00:05:51.285 21:21:13 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:51.285 21:21:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.285 21:21:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.285 21:21:13 -- common/autotest_common.sh@10 -- # set +x 00:05:51.285 
************************************ 00:05:51.285 START TEST locking_app_on_locked_coremask 00:05:51.285 ************************************ 00:05:51.285 21:21:14 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:51.285 21:21:14 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2687984 00:05:51.285 21:21:14 -- event/cpu_locks.sh@116 -- # waitforlisten 2687984 /var/tmp/spdk.sock 00:05:51.285 21:21:14 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.285 21:21:14 -- common/autotest_common.sh@817 -- # '[' -z 2687984 ']' 00:05:51.285 21:21:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.285 21:21:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:51.285 21:21:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.285 21:21:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:51.285 21:21:14 -- common/autotest_common.sh@10 -- # set +x 00:05:51.285 [2024-04-24 21:21:14.117436] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:51.285 [2024-04-24 21:21:14.117488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687984 ] 00:05:51.285 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.544 [2024-04-24 21:21:14.185636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.545 [2024-04-24 21:21:14.258678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.114 21:21:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:52.114 21:21:14 -- common/autotest_common.sh@850 -- # return 0 00:05:52.114 21:21:14 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.114 21:21:14 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2688042 00:05:52.114 21:21:14 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2688042 /var/tmp/spdk2.sock 00:05:52.114 21:21:14 -- common/autotest_common.sh@638 -- # local es=0 00:05:52.114 21:21:14 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2688042 /var/tmp/spdk2.sock 00:05:52.114 21:21:14 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:52.114 21:21:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:52.114 21:21:14 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:52.114 21:21:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:52.114 21:21:14 -- common/autotest_common.sh@641 -- # waitforlisten 2688042 /var/tmp/spdk2.sock 00:05:52.114 21:21:14 -- common/autotest_common.sh@817 -- # '[' -z 2688042 ']' 00:05:52.114 21:21:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.114 21:21:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:52.114 21:21:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:52.114 21:21:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:52.114 21:21:14 -- common/autotest_common.sh@10 -- # set +x 00:05:52.114 [2024-04-24 21:21:14.936730] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:52.114 [2024-04-24 21:21:14.936780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688042 ] 00:05:52.114 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.373 [2024-04-24 21:21:15.028609] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2687984 has claimed it. 00:05:52.373 [2024-04-24 21:21:15.028642] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:52.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2688042) - No such process 00:05:52.942 ERROR: process (pid: 2688042) is no longer running 00:05:52.942 21:21:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:52.942 21:21:15 -- common/autotest_common.sh@850 -- # return 1 00:05:52.942 21:21:15 -- common/autotest_common.sh@641 -- # es=1 00:05:52.942 21:21:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:52.942 21:21:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:52.942 21:21:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:52.942 21:21:15 -- event/cpu_locks.sh@122 -- # locks_exist 2687984 00:05:52.942 21:21:15 -- event/cpu_locks.sh@22 -- # lslocks -p 2687984 00:05:52.942 21:21:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.511 lslocks: write error 00:05:53.511 21:21:16 -- event/cpu_locks.sh@124 -- # killprocess 2687984 00:05:53.511 21:21:16 -- common/autotest_common.sh@936 -- # '[' -z 2687984 ']' 00:05:53.511 21:21:16 -- common/autotest_common.sh@940 -- # kill -0 2687984 00:05:53.511 21:21:16 -- common/autotest_common.sh@941 -- # uname 00:05:53.511 21:21:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.511 21:21:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2687984 00:05:53.511 21:21:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.511 21:21:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.511 21:21:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2687984' 00:05:53.511 killing process with pid 2687984 00:05:53.511 21:21:16 -- common/autotest_common.sh@955 -- # kill 2687984 00:05:53.511 21:21:16 -- common/autotest_common.sh@960 -- # wait 2687984 00:05:53.770 00:05:53.770 real 0m2.571s 00:05:53.770 user 0m2.819s 00:05:53.770 sys 0m0.806s 00:05:53.770 21:21:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.770 21:21:16 -- common/autotest_common.sh@10 -- # set +x 00:05:53.770 ************************************ 00:05:53.770 END TEST locking_app_on_locked_coremask 00:05:53.770 ************************************ 00:05:54.031 21:21:16 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:54.031 21:21:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.031 21:21:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.031 21:21:16 -- common/autotest_common.sh@10 -- # set +x 00:05:54.031 ************************************ 00:05:54.031 START TEST locking_overlapped_coremask 00:05:54.031 
************************************ 00:05:54.031 21:21:16 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:54.031 21:21:16 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2688413 00:05:54.031 21:21:16 -- event/cpu_locks.sh@133 -- # waitforlisten 2688413 /var/tmp/spdk.sock 00:05:54.031 21:21:16 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:54.031 21:21:16 -- common/autotest_common.sh@817 -- # '[' -z 2688413 ']' 00:05:54.031 21:21:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.031 21:21:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.031 21:21:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.031 21:21:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.031 21:21:16 -- common/autotest_common.sh@10 -- # set +x 00:05:54.031 [2024-04-24 21:21:16.891669] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:54.031 [2024-04-24 21:21:16.891722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688413 ] 00:05:54.291 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.291 [2024-04-24 21:21:16.963214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.291 [2024-04-24 21:21:17.038405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.291 [2024-04-24 21:21:17.038506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.291 [2024-04-24 21:21:17.038509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.861 21:21:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:54.861 21:21:17 -- common/autotest_common.sh@850 -- # return 0 00:05:54.861 21:21:17 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2688617 00:05:54.861 21:21:17 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2688617 /var/tmp/spdk2.sock 00:05:54.861 21:21:17 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:54.861 21:21:17 -- common/autotest_common.sh@638 -- # local es=0 00:05:54.861 21:21:17 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2688617 /var/tmp/spdk2.sock 00:05:54.861 21:21:17 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:54.861 21:21:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:54.861 21:21:17 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:54.861 21:21:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:54.861 21:21:17 -- common/autotest_common.sh@641 -- # waitforlisten 2688617 /var/tmp/spdk2.sock 00:05:54.861 21:21:17 -- common/autotest_common.sh@817 -- # '[' -z 2688617 ']' 00:05:54.861 21:21:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.861 21:21:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:54.861 21:21:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:54.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.861 21:21:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:54.861 21:21:17 -- common/autotest_common.sh@10 -- # set +x 00:05:54.861 [2024-04-24 21:21:17.747079] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:54.861 [2024-04-24 21:21:17.747131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688617 ] 00:05:55.120 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.120 [2024-04-24 21:21:17.846709] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2688413 has claimed it. 00:05:55.120 [2024-04-24 21:21:17.846748] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2688617) - No such process 00:05:55.722 ERROR: process (pid: 2688617) is no longer running 00:05:55.722 21:21:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:55.722 21:21:18 -- common/autotest_common.sh@850 -- # return 1 00:05:55.722 21:21:18 -- common/autotest_common.sh@641 -- # es=1 00:05:55.722 21:21:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:55.722 21:21:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:55.722 21:21:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:55.722 21:21:18 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:55.722 21:21:18 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:55.723 21:21:18 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:55.723 21:21:18 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:55.723 21:21:18 -- event/cpu_locks.sh@141 -- # killprocess 2688413 00:05:55.723 21:21:18 -- common/autotest_common.sh@936 -- # '[' -z 2688413 ']' 00:05:55.723 21:21:18 -- common/autotest_common.sh@940 -- # kill -0 2688413 00:05:55.723 21:21:18 -- common/autotest_common.sh@941 -- # uname 00:05:55.723 21:21:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:55.723 21:21:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2688413 00:05:55.723 21:21:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:55.723 21:21:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:55.723 21:21:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2688413' 00:05:55.723 killing process with pid 2688413 00:05:55.723 21:21:18 -- common/autotest_common.sh@955 -- # kill 2688413 00:05:55.723 21:21:18 -- common/autotest_common.sh@960 -- # wait 2688413 00:05:55.986 00:05:55.986 real 0m1.928s 00:05:55.986 user 0m5.354s 00:05:55.986 sys 0m0.464s 00:05:55.986 21:21:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.986 21:21:18 -- common/autotest_common.sh@10 -- # set +x 00:05:55.986 ************************************ 00:05:55.986 END TEST locking_overlapped_coremask 00:05:55.986 ************************************ 00:05:55.986 21:21:18 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:55.986 21:21:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.986 21:21:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.986 21:21:18 -- common/autotest_common.sh@10 -- # set +x 00:05:56.245 ************************************ 00:05:56.245 START TEST locking_overlapped_coremask_via_rpc 00:05:56.245 ************************************ 00:05:56.245 21:21:18 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:56.245 21:21:18 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2688918 00:05:56.245 21:21:18 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:56.245 21:21:18 -- event/cpu_locks.sh@149 -- # waitforlisten 2688918 /var/tmp/spdk.sock 00:05:56.245 21:21:18 -- common/autotest_common.sh@817 -- # '[' -z 2688918 ']' 00:05:56.245 21:21:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.245 21:21:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:56.246 21:21:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.246 21:21:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:56.246 21:21:18 -- common/autotest_common.sh@10 -- # set +x 00:05:56.246 [2024-04-24 21:21:19.015333] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:56.246 [2024-04-24 21:21:19.015378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688918 ] 00:05:56.246 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.246 [2024-04-24 21:21:19.084922] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:56.246 [2024-04-24 21:21:19.084946] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.505 [2024-04-24 21:21:19.160811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.505 [2024-04-24 21:21:19.160904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.505 [2024-04-24 21:21:19.160906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.074 21:21:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:57.074 21:21:19 -- common/autotest_common.sh@850 -- # return 0 00:05:57.074 21:21:19 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2688939 00:05:57.074 21:21:19 -- event/cpu_locks.sh@153 -- # waitforlisten 2688939 /var/tmp/spdk2.sock 00:05:57.074 21:21:19 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:57.074 21:21:19 -- common/autotest_common.sh@817 -- # '[' -z 2688939 ']' 00:05:57.074 21:21:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.074 21:21:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:57.074 21:21:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:57.074 21:21:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:57.074 21:21:19 -- common/autotest_common.sh@10 -- # set +x 00:05:57.074 [2024-04-24 21:21:19.871148] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:05:57.074 [2024-04-24 21:21:19.871197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688939 ] 00:05:57.074 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.333 [2024-04-24 21:21:19.971617] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:57.333 [2024-04-24 21:21:19.971645] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.333 [2024-04-24 21:21:20.140705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.334 [2024-04-24 21:21:20.140821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.334 [2024-04-24 21:21:20.140822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:57.902 21:21:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:57.902 21:21:20 -- common/autotest_common.sh@850 -- # return 0 00:05:57.902 21:21:20 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.902 21:21:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.902 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:05:57.902 21:21:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.902 21:21:20 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.902 21:21:20 -- common/autotest_common.sh@638 -- # local es=0 00:05:57.902 21:21:20 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.902 21:21:20 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:57.902 21:21:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:57.902 21:21:20 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:57.902 21:21:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:57.902 21:21:20 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.902 21:21:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.902 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:05:57.902 [2024-04-24 21:21:20.676521] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2688918 has claimed it. 
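The claim failure above is pure mask arithmetic: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so enabling locks from the second instance collides on the one shared core. A one-liner to confirm the overlap (any shell with arithmetic expansion):

  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2, matching the error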
00:05:57.902 request: 00:05:57.902 { 00:05:57.902 "method": "framework_enable_cpumask_locks", 00:05:57.902 "req_id": 1 00:05:57.902 } 00:05:57.902 Got JSON-RPC error response 00:05:57.902 response: 00:05:57.902 { 00:05:57.902 "code": -32603, 00:05:57.902 "message": "Failed to claim CPU core: 2" 00:05:57.902 } 00:05:57.902 21:21:20 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:57.902 21:21:20 -- common/autotest_common.sh@641 -- # es=1 00:05:57.902 21:21:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:57.902 21:21:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:57.902 21:21:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:57.902 21:21:20 -- event/cpu_locks.sh@158 -- # waitforlisten 2688918 /var/tmp/spdk.sock 00:05:57.902 21:21:20 -- common/autotest_common.sh@817 -- # '[' -z 2688918 ']' 00:05:57.902 21:21:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.902 21:21:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:57.902 21:21:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.902 21:21:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:57.902 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.162 21:21:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.162 21:21:20 -- common/autotest_common.sh@850 -- # return 0 00:05:58.162 21:21:20 -- event/cpu_locks.sh@159 -- # waitforlisten 2688939 /var/tmp/spdk2.sock 00:05:58.162 21:21:20 -- common/autotest_common.sh@817 -- # '[' -z 2688939 ']' 00:05:58.162 21:21:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.162 21:21:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:58.162 21:21:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
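The request/response pair logged above is ordinary JSON-RPC over the target's Unix socket; rpc_cmd is a thin test wrapper around SPDK's bundled client. Assuming a standard SPDK checkout, the equivalent manual call would be:

  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # returns -32603 here because core 2 is already claimed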
00:05:58.162 21:21:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:58.162 21:21:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.162 21:21:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.162 21:21:21 -- common/autotest_common.sh@850 -- # return 0 00:05:58.162 21:21:21 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:58.162 21:21:21 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.162 21:21:21 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.162 21:21:21 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.162 00:05:58.162 real 0m2.086s 00:05:58.162 user 0m0.811s 00:05:58.162 sys 0m0.212s 00:05:58.162 21:21:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.162 21:21:21 -- common/autotest_common.sh@10 -- # set +x 00:05:58.162 ************************************ 00:05:58.162 END TEST locking_overlapped_coremask_via_rpc 00:05:58.162 ************************************ 00:05:58.421 21:21:21 -- event/cpu_locks.sh@174 -- # cleanup 00:05:58.421 21:21:21 -- event/cpu_locks.sh@15 -- # [[ -z 2688918 ]] 00:05:58.421 21:21:21 -- event/cpu_locks.sh@15 -- # killprocess 2688918 00:05:58.421 21:21:21 -- common/autotest_common.sh@936 -- # '[' -z 2688918 ']' 00:05:58.421 21:21:21 -- common/autotest_common.sh@940 -- # kill -0 2688918 00:05:58.421 21:21:21 -- common/autotest_common.sh@941 -- # uname 00:05:58.422 21:21:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.422 21:21:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2688918 00:05:58.422 21:21:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:58.422 21:21:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:58.422 21:21:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2688918' 00:05:58.422 killing process with pid 2688918 00:05:58.422 21:21:21 -- common/autotest_common.sh@955 -- # kill 2688918 00:05:58.422 21:21:21 -- common/autotest_common.sh@960 -- # wait 2688918 00:05:58.681 21:21:21 -- event/cpu_locks.sh@16 -- # [[ -z 2688939 ]] 00:05:58.681 21:21:21 -- event/cpu_locks.sh@16 -- # killprocess 2688939 00:05:58.682 21:21:21 -- common/autotest_common.sh@936 -- # '[' -z 2688939 ']' 00:05:58.682 21:21:21 -- common/autotest_common.sh@940 -- # kill -0 2688939 00:05:58.682 21:21:21 -- common/autotest_common.sh@941 -- # uname 00:05:58.682 21:21:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.682 21:21:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2688939 00:05:58.682 21:21:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:58.682 21:21:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:58.682 21:21:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2688939' 00:05:58.682 killing process with pid 2688939 00:05:58.682 21:21:21 -- common/autotest_common.sh@955 -- # kill 2688939 00:05:58.682 21:21:21 -- common/autotest_common.sh@960 -- # wait 2688939 00:05:59.252 21:21:21 -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.252 21:21:21 -- event/cpu_locks.sh@1 -- # cleanup 00:05:59.252 21:21:21 -- event/cpu_locks.sh@15 -- # [[ -z 2688918 ]] 00:05:59.252 21:21:21 -- event/cpu_locks.sh@15 -- # killprocess 2688918 
00:05:59.252 21:21:21 -- common/autotest_common.sh@936 -- # '[' -z 2688918 ']' 00:05:59.252 21:21:21 -- common/autotest_common.sh@940 -- # kill -0 2688918 00:05:59.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2688918) - No such process 00:05:59.252 21:21:21 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2688918 is not found' 00:05:59.252 Process with pid 2688918 is not found 00:05:59.252 21:21:21 -- event/cpu_locks.sh@16 -- # [[ -z 2688939 ]] 00:05:59.252 21:21:21 -- event/cpu_locks.sh@16 -- # killprocess 2688939 00:05:59.252 21:21:21 -- common/autotest_common.sh@936 -- # '[' -z 2688939 ']' 00:05:59.252 21:21:21 -- common/autotest_common.sh@940 -- # kill -0 2688939 00:05:59.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2688939) - No such process 00:05:59.252 21:21:21 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2688939 is not found' 00:05:59.252 Process with pid 2688939 is not found 00:05:59.252 21:21:21 -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.252 00:05:59.252 real 0m19.622s 00:05:59.252 user 0m31.186s 00:05:59.252 sys 0m6.454s 00:05:59.252 21:21:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.252 21:21:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.252 ************************************ 00:05:59.252 END TEST cpu_locks 00:05:59.252 ************************************ 00:05:59.252 00:05:59.252 real 0m47.520s 00:05:59.252 user 1m24.529s 00:05:59.252 sys 0m11.010s 00:05:59.252 21:21:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.252 21:21:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.252 ************************************ 00:05:59.252 END TEST event 00:05:59.252 ************************************ 00:05:59.252 21:21:21 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:59.252 21:21:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.252 21:21:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.252 21:21:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.512 ************************************ 00:05:59.512 START TEST thread 00:05:59.512 ************************************ 00:05:59.512 21:21:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:59.512 * Looking for test storage... 00:05:59.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:59.512 21:21:22 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.512 21:21:22 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:59.512 21:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.512 21:21:22 -- common/autotest_common.sh@10 -- # set +x 00:05:59.772 ************************************ 00:05:59.772 START TEST thread_poller_perf 00:05:59.772 ************************************ 00:05:59.772 21:21:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.772 [2024-04-24 21:21:22.455770] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:05:59.772 [2024-04-24 21:21:22.455849] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689579 ] 00:05:59.772 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.772 [2024-04-24 21:21:22.526953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.772 [2024-04-24 21:21:22.594339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.772 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:01.166 ====================================== 00:06:01.166 busy:2508056572 (cyc) 00:06:01.166 total_run_count: 429000 00:06:01.166 tsc_hz: 2500000000 (cyc) 00:06:01.166 ====================================== 00:06:01.166 poller_cost: 5846 (cyc), 2338 (nsec) 00:06:01.166 00:06:01.166 real 0m1.249s 00:06:01.166 user 0m1.155s 00:06:01.166 sys 0m0.089s 00:06:01.166 21:21:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.166 21:21:23 -- common/autotest_common.sh@10 -- # set +x 00:06:01.166 ************************************ 00:06:01.166 END TEST thread_poller_perf 00:06:01.166 ************************************ 00:06:01.166 21:21:23 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:01.166 21:21:23 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:01.166 21:21:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.166 21:21:23 -- common/autotest_common.sh@10 -- # set +x 00:06:01.166 ************************************ 00:06:01.166 START TEST thread_poller_perf 00:06:01.166 ************************************ 00:06:01.166 21:21:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:01.166 [2024-04-24 21:21:23.912648] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:06:01.166 [2024-04-24 21:21:23.912741] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689866 ] 00:06:01.166 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.166 [2024-04-24 21:21:23.984865] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.426 [2024-04-24 21:21:24.059769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.426 Running 1000 pollers for 1 seconds with 0 microseconds period. 
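poller_cost in these reports is derived, not measured separately: busy cycles divided by total_run_count, converted to nanoseconds with tsc_hz. Reproducing the 1-microsecond-period run above (figures copied from the log; bc is assumed available):

  echo $(( 2508056572 / 429000 ))                        # 5846 cycles per poller invocation
  echo 'scale=0; 5846 * 1000000000 / 2500000000' | bc    # 2338 nsec at the reported 2.5 GHz TSC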
00:06:02.365 ====================================== 00:06:02.365 busy:2501966686 (cyc) 00:06:02.365 total_run_count: 5632000 00:06:02.365 tsc_hz: 2500000000 (cyc) 00:06:02.365 ====================================== 00:06:02.365 poller_cost: 444 (cyc), 177 (nsec) 00:06:02.365 00:06:02.365 real 0m1.252s 00:06:02.365 user 0m1.162s 00:06:02.365 sys 0m0.086s 00:06:02.365 21:21:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.365 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:06:02.365 ************************************ 00:06:02.365 END TEST thread_poller_perf 00:06:02.365 ************************************ 00:06:02.365 21:21:25 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:02.365 00:06:02.365 real 0m3.027s 00:06:02.365 user 0m2.508s 00:06:02.365 sys 0m0.483s 00:06:02.365 21:21:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.365 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:06:02.365 ************************************ 00:06:02.365 END TEST thread 00:06:02.365 ************************************ 00:06:02.365 21:21:25 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:02.365 21:21:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.365 21:21:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.365 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:06:02.624 ************************************ 00:06:02.624 START TEST accel 00:06:02.624 ************************************ 00:06:02.624 21:21:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:02.624 * Looking for test storage... 00:06:02.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:02.624 21:21:25 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:02.625 21:21:25 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:02.625 21:21:25 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:02.625 21:21:25 -- accel/accel.sh@62 -- # spdk_tgt_pid=2690205 00:06:02.625 21:21:25 -- accel/accel.sh@63 -- # waitforlisten 2690205 00:06:02.625 21:21:25 -- common/autotest_common.sh@817 -- # '[' -z 2690205 ']' 00:06:02.625 21:21:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.625 21:21:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:02.625 21:21:25 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:02.625 21:21:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.625 21:21:25 -- accel/accel.sh@61 -- # build_accel_config 00:06:02.625 21:21:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:02.625 21:21:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.625 21:21:25 -- common/autotest_common.sh@10 -- # set +x 00:06:02.625 21:21:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.625 21:21:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.625 21:21:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.625 21:21:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.625 21:21:25 -- accel/accel.sh@40 -- # local IFS=, 00:06:02.625 21:21:25 -- accel/accel.sh@41 -- # jq -r . 
00:06:02.885 [2024-04-24 21:21:25.550849] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:06:02.885 [2024-04-24 21:21:25.550902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690205 ] 00:06:02.885 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.885 [2024-04-24 21:21:25.619308] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.885 [2024-04-24 21:21:25.688124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.454 21:21:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:03.455 21:21:26 -- common/autotest_common.sh@850 -- # return 0 00:06:03.455 21:21:26 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:03.455 21:21:26 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:03.455 21:21:26 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:03.455 21:21:26 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:03.455 21:21:26 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:03.455 21:21:26 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:03.455 21:21:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:03.455 21:21:26 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:03.455 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:06:03.715 21:21:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:03.715 21:21:26 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # IFS== 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # read -r opc module 00:06:03.715 21:21:26 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.715 21:21:26 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # IFS== 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # read -r opc module 00:06:03.715 21:21:26 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.715 21:21:26 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # IFS== 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # read -r opc module 00:06:03.715 21:21:26 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.715 21:21:26 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # IFS== 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # read -r opc module 00:06:03.715 21:21:26 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.715 21:21:26 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # IFS== 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # read -r opc module 00:06:03.715 21:21:26 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.715 21:21:26 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # IFS== 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # read -r opc module 00:06:03.715 21:21:26 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.715 21:21:26 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # IFS== 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # read -r opc module 00:06:03.715 21:21:26 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:03.715 21:21:26 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # IFS== 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # read -r opc module 00:06:03.715 21:21:26 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.715 21:21:26 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # IFS== 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # read -r opc module 00:06:03.715 21:21:26 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.715 21:21:26 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # IFS== 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # read -r opc module 00:06:03.715 21:21:26 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.715 21:21:26 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # IFS== 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # read -r opc module 00:06:03.715 21:21:26 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.715 21:21:26 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # IFS== 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # read -r opc module 00:06:03.715 21:21:26 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.715 21:21:26 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # IFS== 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # read -r opc module 00:06:03.715 21:21:26 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.715 21:21:26 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # IFS== 00:06:03.715 21:21:26 -- accel/accel.sh@72 -- # read -r opc module 00:06:03.715 21:21:26 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.715 21:21:26 -- accel/accel.sh@75 -- # killprocess 2690205 00:06:03.715 21:21:26 -- common/autotest_common.sh@936 -- # '[' -z 2690205 ']' 00:06:03.715 21:21:26 -- common/autotest_common.sh@940 -- # kill -0 2690205 00:06:03.715 21:21:26 -- common/autotest_common.sh@941 -- # uname 00:06:03.715 21:21:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:03.715 21:21:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2690205 00:06:03.715 21:21:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:03.715 21:21:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:03.715 21:21:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2690205' 00:06:03.715 killing process with pid 2690205 00:06:03.715 21:21:26 -- common/autotest_common.sh@955 -- # kill 2690205 00:06:03.715 21:21:26 -- common/autotest_common.sh@960 -- # wait 2690205 00:06:03.975 21:21:26 -- accel/accel.sh@76 -- # trap - ERR 00:06:03.975 21:21:26 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:03.975 21:21:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:03.975 21:21:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.975 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:06:04.235 21:21:26 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:04.235 21:21:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:04.235 21:21:26 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:04.235 21:21:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.235 21:21:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.235 21:21:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.235 21:21:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.235 21:21:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.235 21:21:26 -- accel/accel.sh@40 -- # local IFS=, 00:06:04.235 21:21:26 -- accel/accel.sh@41 -- # jq -r . 00:06:04.235 21:21:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.235 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:06:04.235 21:21:26 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:04.235 21:21:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:04.235 21:21:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.235 21:21:26 -- common/autotest_common.sh@10 -- # set +x 00:06:04.235 ************************************ 00:06:04.235 START TEST accel_missing_filename 00:06:04.235 ************************************ 00:06:04.235 21:21:27 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:04.235 21:21:27 -- common/autotest_common.sh@638 -- # local es=0 00:06:04.235 21:21:27 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:04.235 21:21:27 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:04.235 21:21:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:04.235 21:21:27 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:04.235 21:21:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:04.235 21:21:27 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:04.235 21:21:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:04.235 21:21:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.235 21:21:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.235 21:21:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.235 21:21:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.235 21:21:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.235 21:21:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.235 21:21:27 -- accel/accel.sh@40 -- # local IFS=, 00:06:04.235 21:21:27 -- accel/accel.sh@41 -- # jq -r . 00:06:04.495 [2024-04-24 21:21:27.141616] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:06:04.495 [2024-04-24 21:21:27.141699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690521 ] 00:06:04.495 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.495 [2024-04-24 21:21:27.214234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.495 [2024-04-24 21:21:27.284352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.495 [2024-04-24 21:21:27.325191] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.755 [2024-04-24 21:21:27.385223] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:04.755 A filename is required. 
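accel_missing_filename passes precisely because accel_perf rejects a compress workload that has no input: per its usage text, -l names the uncompressed input file for compress/decompress workloads. A sketch of the failing and the corrected invocation (binary path shortened relative to the SPDK tree; the bib file is the one the next test uses):

  ./build/examples/accel_perf -t 1 -w compress                          # fails: 'A filename is required.'
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib      # supplies the input via -l

Note the corrected form deliberately omits -y: as the next test (accel_compress_verify) shows, compress does not support the verify option.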
00:06:04.755 21:21:27 -- common/autotest_common.sh@641 -- # es=234 00:06:04.755 21:21:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:04.755 21:21:27 -- common/autotest_common.sh@650 -- # es=106 00:06:04.755 21:21:27 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:04.755 21:21:27 -- common/autotest_common.sh@658 -- # es=1 00:06:04.755 21:21:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:04.755 00:06:04.755 real 0m0.366s 00:06:04.755 user 0m0.270s 00:06:04.755 sys 0m0.135s 00:06:04.755 21:21:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.755 21:21:27 -- common/autotest_common.sh@10 -- # set +x 00:06:04.755 ************************************ 00:06:04.755 END TEST accel_missing_filename 00:06:04.755 ************************************ 00:06:04.755 21:21:27 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:04.755 21:21:27 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:04.755 21:21:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.755 21:21:27 -- common/autotest_common.sh@10 -- # set +x 00:06:05.015 ************************************ 00:06:05.015 START TEST accel_compress_verify 00:06:05.015 ************************************ 00:06:05.015 21:21:27 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.015 21:21:27 -- common/autotest_common.sh@638 -- # local es=0 00:06:05.015 21:21:27 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.015 21:21:27 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:05.015 21:21:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.015 21:21:27 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:05.015 21:21:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.015 21:21:27 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.015 21:21:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.015 21:21:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.015 21:21:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.015 21:21:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.015 21:21:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.015 21:21:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.015 21:21:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.015 21:21:27 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.015 21:21:27 -- accel/accel.sh@41 -- # jq -r . 00:06:05.015 [2024-04-24 21:21:27.688888] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:06:05.016 [2024-04-24 21:21:27.688937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690602 ] 00:06:05.016 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.016 [2024-04-24 21:21:27.755236] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.016 [2024-04-24 21:21:27.826895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.016 [2024-04-24 21:21:27.868105] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.276 [2024-04-24 21:21:27.928412] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:05.276 00:06:05.276 Compression does not support the verify option, aborting. 00:06:05.276 21:21:28 -- common/autotest_common.sh@641 -- # es=161 00:06:05.276 21:21:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:05.276 21:21:28 -- common/autotest_common.sh@650 -- # es=33 00:06:05.276 21:21:28 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:05.276 21:21:28 -- common/autotest_common.sh@658 -- # es=1 00:06:05.276 21:21:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:05.276 00:06:05.276 real 0m0.351s 00:06:05.276 user 0m0.257s 00:06:05.276 sys 0m0.131s 00:06:05.276 21:21:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.276 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:06:05.276 ************************************ 00:06:05.276 END TEST accel_compress_verify 00:06:05.276 ************************************ 00:06:05.276 21:21:28 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:05.276 21:21:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:05.276 21:21:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.276 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:06:05.536 ************************************ 00:06:05.536 START TEST accel_wrong_workload 00:06:05.536 ************************************ 00:06:05.536 21:21:28 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:05.536 21:21:28 -- common/autotest_common.sh@638 -- # local es=0 00:06:05.536 21:21:28 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:05.536 21:21:28 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:05.536 21:21:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.536 21:21:28 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:05.536 21:21:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.536 21:21:28 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:05.536 21:21:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:05.536 21:21:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.536 21:21:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.536 21:21:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.536 21:21:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.536 21:21:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.536 21:21:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.536 21:21:28 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.536 21:21:28 -- accel/accel.sh@41 -- # jq -r . 
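The two compress cases above also show how the harness normalizes exit codes before asserting: accel_missing_filename folds es=234 down to 106 (234 - 128, presumably stripping the signal offset) and then to 1, and accel_compress_verify does the same with 161 -> 33 -> 1, so NOT only ever compares against a simple non-zero status. The verify case fails by design, since accel_perf rejects -y for compression ("Compression does not support the verify option, aborting."); a sketch of that invocation, with paths as traced above:

    # Expected to fail: -y (verify result) is not supported together
    # with the compress workload.
    ./spdk/build/examples/accel_perf -t 1 -w compress \
        -l ./spdk/test/accel/bib -y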
00:06:05.536 Unsupported workload type: foobar 00:06:05.536 [2024-04-24 21:21:28.245466] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:05.536 accel_perf options: 00:06:05.536 [-h help message] 00:06:05.536 [-q queue depth per core] 00:06:05.536 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:05.536 [-T number of threads per core 00:06:05.536 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:05.536 [-t time in seconds] 00:06:05.536 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:05.536 [ dif_verify, , dif_generate, dif_generate_copy 00:06:05.536 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:05.536 [-l for compress/decompress workloads, name of uncompressed input file 00:06:05.536 [-S for crc32c workload, use this seed value (default 0) 00:06:05.536 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:05.536 [-f for fill workload, use this BYTE value (default 255) 00:06:05.536 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:05.536 [-y verify result if this switch is on] 00:06:05.536 [-a tasks to allocate per core (default: same value as -q)] 00:06:05.536 Can be used to spread operations across a wider range of memory. 00:06:05.536 21:21:28 -- common/autotest_common.sh@641 -- # es=1 00:06:05.536 21:21:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:05.536 21:21:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:05.536 21:21:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:05.536 00:06:05.536 real 0m0.035s 00:06:05.536 user 0m0.020s 00:06:05.536 sys 0m0.016s 00:06:05.536 21:21:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.536 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:06:05.536 ************************************ 00:06:05.536 END TEST accel_wrong_workload 00:06:05.536 ************************************ 00:06:05.536 Error: writing output failed: Broken pipe 00:06:05.536 21:21:28 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:05.536 21:21:28 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:05.536 21:21:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.536 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:06:05.797 ************************************ 00:06:05.797 START TEST accel_negative_buffers 00:06:05.797 ************************************ 00:06:05.797 21:21:28 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:05.797 21:21:28 -- common/autotest_common.sh@638 -- # local es=0 00:06:05.797 21:21:28 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:05.797 21:21:28 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:05.797 21:21:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.797 21:21:28 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:05.797 21:21:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:05.797 21:21:28 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:05.797 21:21:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:05.797 21:21:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.797 21:21:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.797 21:21:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.797 21:21:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.797 21:21:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.797 21:21:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.797 21:21:28 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.797 21:21:28 -- accel/accel.sh@41 -- # jq -r . 00:06:05.797 -x option must be non-negative. 00:06:05.797 [2024-04-24 21:21:28.459534] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:05.797 accel_perf options: 00:06:05.797 [-h help message] 00:06:05.797 [-q queue depth per core] 00:06:05.797 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:05.797 [-T number of threads per core 00:06:05.797 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:05.797 [-t time in seconds] 00:06:05.797 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:05.797 [ dif_verify, , dif_generate, dif_generate_copy 00:06:05.797 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:05.797 [-l for compress/decompress workloads, name of uncompressed input file 00:06:05.797 [-S for crc32c workload, use this seed value (default 0) 00:06:05.797 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:05.797 [-f for fill workload, use this BYTE value (default 255) 00:06:05.797 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:05.797 [-y verify result if this switch is on] 00:06:05.797 [-a tasks to allocate per core (default: same value as -q)] 00:06:05.797 Can be used to spread operations across a wider range of memory. 
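Both option-parsing failures end with accel_perf printing the usage text captured above (foobar is not a known workload, and -x, the xor source-buffer count, must be non-negative and at least 2 per the listing). As a quick reference, a well-formed invocation built only from flags in that usage text might look like:

    # Illustrative combination of the documented flags: queue depth,
    # transfer size, run time, workload type, and result verification.
    ./spdk/build/examples/accel_perf -q 64 -o 4096 -t 1 -w crc32c -y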
00:06:05.797 21:21:28 -- common/autotest_common.sh@641 -- # es=1 00:06:05.797 21:21:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:05.797 21:21:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:05.797 21:21:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:05.797 00:06:05.797 real 0m0.035s 00:06:05.797 user 0m0.017s 00:06:05.797 sys 0m0.017s 00:06:05.797 21:21:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.797 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:06:05.797 ************************************ 00:06:05.797 END TEST accel_negative_buffers 00:06:05.797 ************************************ 00:06:05.797 Error: writing output failed: Broken pipe 00:06:05.797 21:21:28 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:05.797 21:21:28 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:05.797 21:21:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.797 21:21:28 -- common/autotest_common.sh@10 -- # set +x 00:06:05.797 ************************************ 00:06:05.797 START TEST accel_crc32c 00:06:05.797 ************************************ 00:06:05.797 21:21:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:05.797 21:21:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:05.797 21:21:28 -- accel/accel.sh@17 -- # local accel_module 00:06:05.797 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:05.797 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:05.797 21:21:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:05.797 21:21:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:05.797 21:21:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.797 21:21:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.797 21:21:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.797 21:21:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.797 21:21:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.797 21:21:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.797 21:21:28 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.797 21:21:28 -- accel/accel.sh@41 -- # jq -r . 00:06:06.057 [2024-04-24 21:21:28.693492] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:06:06.057 [2024-04-24 21:21:28.693554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690895 ] 00:06:06.057 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.057 [2024-04-24 21:21:28.765151] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.057 [2024-04-24 21:21:28.835945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.057 21:21:28 -- accel/accel.sh@20 -- # val= 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.057 21:21:28 -- accel/accel.sh@20 -- # val= 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.057 21:21:28 -- accel/accel.sh@20 -- # val=0x1 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.057 21:21:28 -- accel/accel.sh@20 -- # val= 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.057 21:21:28 -- accel/accel.sh@20 -- # val= 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.057 21:21:28 -- accel/accel.sh@20 -- # val=crc32c 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.057 21:21:28 -- accel/accel.sh@20 -- # val=32 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.057 21:21:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.057 21:21:28 -- accel/accel.sh@20 -- # val= 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.057 21:21:28 -- accel/accel.sh@20 -- # val=software 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@22 -- # accel_module=software 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.057 21:21:28 -- accel/accel.sh@20 -- # val=32 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.057 21:21:28 -- accel/accel.sh@20 -- # val=32 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.057 21:21:28 -- 
accel/accel.sh@20 -- # val=1 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.057 21:21:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.057 21:21:28 -- accel/accel.sh@20 -- # val=Yes 00:06:06.057 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.057 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.058 21:21:28 -- accel/accel.sh@20 -- # val= 00:06:06.058 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.058 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.058 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:06.058 21:21:28 -- accel/accel.sh@20 -- # val= 00:06:06.058 21:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.058 21:21:28 -- accel/accel.sh@19 -- # IFS=: 00:06:06.058 21:21:28 -- accel/accel.sh@19 -- # read -r var val 00:06:07.446 21:21:30 -- accel/accel.sh@20 -- # val= 00:06:07.446 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.446 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.446 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.446 21:21:30 -- accel/accel.sh@20 -- # val= 00:06:07.446 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.446 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.446 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.446 21:21:30 -- accel/accel.sh@20 -- # val= 00:06:07.446 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.446 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.446 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.446 21:21:30 -- accel/accel.sh@20 -- # val= 00:06:07.446 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.446 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.446 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.446 21:21:30 -- accel/accel.sh@20 -- # val= 00:06:07.446 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.446 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.446 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.446 21:21:30 -- accel/accel.sh@20 -- # val= 00:06:07.446 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.446 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.446 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.446 21:21:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.446 21:21:30 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:07.446 21:21:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.446 00:06:07.446 real 0m1.372s 00:06:07.446 user 0m1.259s 00:06:07.446 sys 0m0.128s 00:06:07.446 21:21:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.446 21:21:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.446 ************************************ 00:06:07.446 END TEST accel_crc32c 00:06:07.446 ************************************ 00:06:07.446 21:21:30 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:07.446 21:21:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:07.446 21:21:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.446 21:21:30 -- common/autotest_common.sh@10 -- # set +x 00:06:07.446 ************************************ 00:06:07.446 START TEST 
accel_crc32c_C2 00:06:07.446 ************************************ 00:06:07.446 21:21:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:07.446 21:21:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.446 21:21:30 -- accel/accel.sh@17 -- # local accel_module 00:06:07.446 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.446 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.446 21:21:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:07.446 21:21:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:07.446 21:21:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.446 21:21:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.446 21:21:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.446 21:21:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.446 21:21:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.446 21:21:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.446 21:21:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:07.446 21:21:30 -- accel/accel.sh@41 -- # jq -r . 00:06:07.446 [2024-04-24 21:21:30.264105] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:06:07.446 [2024-04-24 21:21:30.264164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691186 ] 00:06:07.446 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.705 [2024-04-24 21:21:30.337608] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.705 [2024-04-24 21:21:30.410424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val= 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val= 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val=0x1 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val= 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val= 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val=crc32c 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val=0 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val= 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val=software 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@22 -- # accel_module=software 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val=32 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val=32 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val=1 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val=Yes 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.705 21:21:30 -- accel/accel.sh@20 -- # val= 00:06:07.705 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.705 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:07.706 21:21:30 -- accel/accel.sh@20 -- # val= 00:06:07.706 21:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.706 21:21:30 -- accel/accel.sh@19 -- # IFS=: 00:06:07.706 21:21:30 -- accel/accel.sh@19 -- # read -r var val 00:06:09.087 21:21:31 -- accel/accel.sh@20 -- # val= 00:06:09.087 21:21:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.087 21:21:31 -- accel/accel.sh@19 -- # IFS=: 00:06:09.087 21:21:31 -- accel/accel.sh@19 -- # read -r var val 00:06:09.087 21:21:31 -- accel/accel.sh@20 -- # val= 00:06:09.087 21:21:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.087 21:21:31 -- accel/accel.sh@19 -- # IFS=: 00:06:09.087 21:21:31 -- accel/accel.sh@19 -- # read -r var val 00:06:09.087 21:21:31 -- accel/accel.sh@20 -- # val= 00:06:09.087 21:21:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.087 21:21:31 -- accel/accel.sh@19 -- # IFS=: 00:06:09.087 21:21:31 -- accel/accel.sh@19 -- # read -r var val 00:06:09.087 21:21:31 -- accel/accel.sh@20 -- # val= 00:06:09.087 21:21:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.087 21:21:31 -- accel/accel.sh@19 -- # IFS=: 00:06:09.087 21:21:31 -- accel/accel.sh@19 -- # read -r var val 00:06:09.087 21:21:31 -- accel/accel.sh@20 -- # val= 00:06:09.087 21:21:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.087 21:21:31 -- accel/accel.sh@19 -- # IFS=: 00:06:09.087 21:21:31 -- 
accel/accel.sh@19 -- # read -r var val 00:06:09.087 21:21:31 -- accel/accel.sh@20 -- # val= 00:06:09.087 21:21:31 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.087 21:21:31 -- accel/accel.sh@19 -- # IFS=: 00:06:09.087 21:21:31 -- accel/accel.sh@19 -- # read -r var val 00:06:09.087 21:21:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.087 21:21:31 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:09.087 21:21:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.087 00:06:09.087 real 0m1.375s 00:06:09.087 user 0m1.258s 00:06:09.087 sys 0m0.129s 00:06:09.087 21:21:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.087 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:06:09.087 ************************************ 00:06:09.087 END TEST accel_crc32c_C2 00:06:09.087 ************************************ 00:06:09.087 21:21:31 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:09.087 21:21:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:09.087 21:21:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.087 21:21:31 -- common/autotest_common.sh@10 -- # set +x 00:06:09.087 ************************************ 00:06:09.087 START TEST accel_copy 00:06:09.087 ************************************ 00:06:09.087 21:21:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:09.087 21:21:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.087 21:21:31 -- accel/accel.sh@17 -- # local accel_module 00:06:09.087 21:21:31 -- accel/accel.sh@19 -- # IFS=: 00:06:09.087 21:21:31 -- accel/accel.sh@19 -- # read -r var val 00:06:09.087 21:21:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:09.088 21:21:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:09.088 21:21:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.088 21:21:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.088 21:21:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.088 21:21:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.088 21:21:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.088 21:21:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.088 21:21:31 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.088 21:21:31 -- accel/accel.sh@41 -- # jq -r . 00:06:09.088 [2024-04-24 21:21:31.833701] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
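The two crc32c runs that just passed differ only in their flags: the first seeds the CRC with -S 32, while the C2 variant leaves the seed at 0 (val=0 in its trace) and instead sets the IO vector size with -C 2. In both, -c /dev/fd/62 feeds accel_perf the JSON accel configuration that build_accel_config assembles; it is empty here, which is why every [[ -n '' ]] check in the trace falls through. The driven commands, as reconstructed from the accel.sh trace:

    # crc32c with an explicit seed, then crc32c over a 2-element io vector;
    # /dev/fd/62 carries the (empty) JSON accel config from the harness.
    ./spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
    ./spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2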
00:06:09.088 [2024-04-24 21:21:31.833764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691480 ] 00:06:09.088 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.088 [2024-04-24 21:21:31.906119] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.347 [2024-04-24 21:21:31.978440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.347 21:21:32 -- accel/accel.sh@20 -- # val= 00:06:09.347 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.347 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.347 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- accel/accel.sh@20 -- # val= 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- accel/accel.sh@20 -- # val=0x1 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- accel/accel.sh@20 -- # val= 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- accel/accel.sh@20 -- # val= 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- accel/accel.sh@20 -- # val=copy 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- accel/accel.sh@20 -- # val= 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- accel/accel.sh@20 -- # val=software 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@22 -- # accel_module=software 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- accel/accel.sh@20 -- # val=32 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- accel/accel.sh@20 -- # val=32 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- accel/accel.sh@20 -- # val=1 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- accel/accel.sh@20 -- # val=Yes 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- accel/accel.sh@20 -- # val= 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:09.348 21:21:32 -- accel/accel.sh@20 -- # val= 00:06:09.348 21:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # IFS=: 00:06:09.348 21:21:32 -- accel/accel.sh@19 -- # read -r var val 00:06:10.286 21:21:33 -- accel/accel.sh@20 -- # val= 00:06:10.287 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.287 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.287 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.287 21:21:33 -- accel/accel.sh@20 -- # val= 00:06:10.287 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.287 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.287 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.287 21:21:33 -- accel/accel.sh@20 -- # val= 00:06:10.287 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.287 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.287 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.287 21:21:33 -- accel/accel.sh@20 -- # val= 00:06:10.287 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.287 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.287 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.287 21:21:33 -- accel/accel.sh@20 -- # val= 00:06:10.287 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.287 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.287 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.287 21:21:33 -- accel/accel.sh@20 -- # val= 00:06:10.287 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.287 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.287 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.287 21:21:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.287 21:21:33 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:10.287 21:21:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.287 00:06:10.287 real 0m1.368s 00:06:10.287 user 0m1.249s 00:06:10.287 sys 0m0.131s 00:06:10.287 21:21:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.287 21:21:33 -- common/autotest_common.sh@10 -- # set +x 00:06:10.287 ************************************ 00:06:10.287 END TEST accel_copy 00:06:10.287 ************************************ 00:06:10.546 21:21:33 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:10.546 21:21:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:10.546 21:21:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.546 21:21:33 -- common/autotest_common.sh@10 -- # set +x 00:06:10.546 ************************************ 00:06:10.546 START TEST accel_fill 00:06:10.546 ************************************ 00:06:10.546 21:21:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:10.546 21:21:33 -- accel/accel.sh@16 -- # local accel_opc 
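The fill test starting here is the first to override the perf defaults: -f 128 sets the fill byte (it shows up in the trace below as val=0x80), and -q 64 / -a 64 size the per-core queue depth and task pool, matching the -q and -a entries in the usage text printed earlier. Reconstructed from the run_test line above:

    # Fill the target buffers with byte 0x80, 64 ops in flight per core.
    ./spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill \
        -f 128 -q 64 -a 64 -y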
00:06:10.546 21:21:33 -- accel/accel.sh@17 -- # local accel_module 00:06:10.546 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.546 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.546 21:21:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:10.546 21:21:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:10.546 21:21:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.546 21:21:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.546 21:21:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.546 21:21:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.546 21:21:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.546 21:21:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.546 21:21:33 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.546 21:21:33 -- accel/accel.sh@41 -- # jq -r . 00:06:10.546 [2024-04-24 21:21:33.388234] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:06:10.546 [2024-04-24 21:21:33.388287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691776 ] 00:06:10.546 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.805 [2024-04-24 21:21:33.457383] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.806 [2024-04-24 21:21:33.525394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val= 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val= 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val=0x1 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val= 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val= 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val=fill 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val=0x80 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 
-- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val= 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val=software 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@22 -- # accel_module=software 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val=64 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val=64 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val=1 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val=Yes 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val= 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:10.806 21:21:33 -- accel/accel.sh@20 -- # val= 00:06:10.806 21:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # IFS=: 00:06:10.806 21:21:33 -- accel/accel.sh@19 -- # read -r var val 00:06:12.186 21:21:34 -- accel/accel.sh@20 -- # val= 00:06:12.186 21:21:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.186 21:21:34 -- accel/accel.sh@19 -- # IFS=: 00:06:12.186 21:21:34 -- accel/accel.sh@19 -- # read -r var val 00:06:12.186 21:21:34 -- accel/accel.sh@20 -- # val= 00:06:12.186 21:21:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.186 21:21:34 -- accel/accel.sh@19 -- # IFS=: 00:06:12.186 21:21:34 -- accel/accel.sh@19 -- # read -r var val 00:06:12.186 21:21:34 -- accel/accel.sh@20 -- # val= 00:06:12.186 21:21:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.186 21:21:34 -- accel/accel.sh@19 -- # IFS=: 00:06:12.186 21:21:34 -- accel/accel.sh@19 -- # read -r var val 00:06:12.186 21:21:34 -- accel/accel.sh@20 -- # val= 00:06:12.186 21:21:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.186 21:21:34 -- accel/accel.sh@19 -- # IFS=: 00:06:12.186 21:21:34 -- accel/accel.sh@19 -- # read -r var val 00:06:12.186 21:21:34 -- accel/accel.sh@20 -- # val= 00:06:12.186 21:21:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.186 21:21:34 -- accel/accel.sh@19 -- # IFS=: 00:06:12.186 21:21:34 -- accel/accel.sh@19 -- # read -r var val 00:06:12.186 21:21:34 -- accel/accel.sh@20 -- # val= 00:06:12.186 21:21:34 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.186 21:21:34 -- accel/accel.sh@19 
-- # IFS=: 00:06:12.186 21:21:34 -- accel/accel.sh@19 -- # read -r var val 00:06:12.186 21:21:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.186 21:21:34 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:12.186 21:21:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.186 00:06:12.186 real 0m1.362s 00:06:12.186 user 0m1.250s 00:06:12.186 sys 0m0.125s 00:06:12.186 21:21:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.186 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:06:12.186 ************************************ 00:06:12.186 END TEST accel_fill 00:06:12.186 ************************************ 00:06:12.186 21:21:34 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:12.186 21:21:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:12.186 21:21:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.186 21:21:34 -- common/autotest_common.sh@10 -- # set +x 00:06:12.186 ************************************ 00:06:12.186 START TEST accel_copy_crc32c 00:06:12.186 ************************************ 00:06:12.186 21:21:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:12.186 21:21:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.186 21:21:34 -- accel/accel.sh@17 -- # local accel_module 00:06:12.186 21:21:34 -- accel/accel.sh@19 -- # IFS=: 00:06:12.186 21:21:34 -- accel/accel.sh@19 -- # read -r var val 00:06:12.186 21:21:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:12.186 21:21:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:12.186 21:21:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.186 21:21:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.186 21:21:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.186 21:21:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.186 21:21:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.186 21:21:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.186 21:21:34 -- accel/accel.sh@40 -- # local IFS=, 00:06:12.186 21:21:34 -- accel/accel.sh@41 -- # jq -r . 00:06:12.186 [2024-04-24 21:21:34.947424] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
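copy_crc32c chains a buffer copy with the CRC computation, which is why its trace below carries two '4096 bytes' buffer entries (source and destination) where the plain crc32c run had one. The invocation, again reconstructed from the accel.sh trace:

    # Copy 4 KiB buffers and CRC them in one chained operation.
    ./spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y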
00:06:12.186 [2024-04-24 21:21:34.947494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692066 ] 00:06:12.186 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.186 [2024-04-24 21:21:35.018811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.453 [2024-04-24 21:21:35.091613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.453 21:21:35 -- accel/accel.sh@20 -- # val= 00:06:12.453 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.453 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.453 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.453 21:21:35 -- accel/accel.sh@20 -- # val= 00:06:12.453 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.453 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.453 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.453 21:21:35 -- accel/accel.sh@20 -- # val=0x1 00:06:12.453 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.453 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.453 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.453 21:21:35 -- accel/accel.sh@20 -- # val= 00:06:12.453 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.453 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.453 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.453 21:21:35 -- accel/accel.sh@20 -- # val= 00:06:12.453 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.453 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.453 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.453 21:21:35 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:12.453 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.453 21:21:35 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:12.453 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.453 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.453 21:21:35 -- accel/accel.sh@20 -- # val=0 00:06:12.453 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.453 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.453 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.453 21:21:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.454 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.454 21:21:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.454 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.454 21:21:35 -- accel/accel.sh@20 -- # val= 00:06:12.454 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.454 21:21:35 -- accel/accel.sh@20 -- # val=software 00:06:12.454 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.454 21:21:35 -- accel/accel.sh@22 -- # accel_module=software 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.454 21:21:35 -- accel/accel.sh@20 -- # val=32 00:06:12.454 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # read -r var val 
00:06:12.454 21:21:35 -- accel/accel.sh@20 -- # val=32 00:06:12.454 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.454 21:21:35 -- accel/accel.sh@20 -- # val=1 00:06:12.454 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.454 21:21:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.454 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.454 21:21:35 -- accel/accel.sh@20 -- # val=Yes 00:06:12.454 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.454 21:21:35 -- accel/accel.sh@20 -- # val= 00:06:12.454 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:12.454 21:21:35 -- accel/accel.sh@20 -- # val= 00:06:12.454 21:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # IFS=: 00:06:12.454 21:21:35 -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 21:21:36 -- accel/accel.sh@20 -- # val= 00:06:13.417 21:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 21:21:36 -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 21:21:36 -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 21:21:36 -- accel/accel.sh@20 -- # val= 00:06:13.417 21:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 21:21:36 -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 21:21:36 -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 21:21:36 -- accel/accel.sh@20 -- # val= 00:06:13.417 21:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 21:21:36 -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 21:21:36 -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 21:21:36 -- accel/accel.sh@20 -- # val= 00:06:13.417 21:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 21:21:36 -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 21:21:36 -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 21:21:36 -- accel/accel.sh@20 -- # val= 00:06:13.417 21:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 21:21:36 -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 21:21:36 -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 21:21:36 -- accel/accel.sh@20 -- # val= 00:06:13.417 21:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.417 21:21:36 -- accel/accel.sh@19 -- # IFS=: 00:06:13.417 21:21:36 -- accel/accel.sh@19 -- # read -r var val 00:06:13.417 21:21:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.417 21:21:36 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:13.417 21:21:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.417 00:06:13.417 real 0m1.369s 00:06:13.417 user 0m1.249s 00:06:13.417 sys 0m0.134s 00:06:13.417 21:21:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.417 21:21:36 -- common/autotest_common.sh@10 -- # set +x 00:06:13.417 ************************************ 00:06:13.417 END TEST accel_copy_crc32c 00:06:13.417 ************************************ 00:06:13.677 21:21:36 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:13.677 
21:21:36 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:13.677 21:21:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.677 21:21:36 -- common/autotest_common.sh@10 -- # set +x 00:06:13.677 ************************************ 00:06:13.677 START TEST accel_copy_crc32c_C2 00:06:13.677 ************************************ 00:06:13.677 21:21:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:13.677 21:21:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.677 21:21:36 -- accel/accel.sh@17 -- # local accel_module 00:06:13.677 21:21:36 -- accel/accel.sh@19 -- # IFS=: 00:06:13.677 21:21:36 -- accel/accel.sh@19 -- # read -r var val 00:06:13.677 21:21:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:13.677 21:21:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:13.677 21:21:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.677 21:21:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.677 21:21:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.677 21:21:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.677 21:21:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.677 21:21:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.677 21:21:36 -- accel/accel.sh@40 -- # local IFS=, 00:06:13.677 21:21:36 -- accel/accel.sh@41 -- # jq -r . 00:06:13.677 [2024-04-24 21:21:36.535760] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:06:13.677 [2024-04-24 21:21:36.535823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692361 ] 00:06:13.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.938 [2024-04-24 21:21:36.607916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.938 [2024-04-24 21:21:36.678748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.938 21:21:36 -- accel/accel.sh@20 -- # val= 00:06:13.938 21:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.938 21:21:36 -- accel/accel.sh@19 -- # IFS=: 00:06:13.938 21:21:36 -- accel/accel.sh@19 -- # read -r var val 00:06:13.938 21:21:36 -- accel/accel.sh@20 -- # val= 00:06:13.938 21:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.938 21:21:36 -- accel/accel.sh@19 -- # IFS=: 00:06:13.938 21:21:36 -- accel/accel.sh@19 -- # read -r var val 00:06:13.938 21:21:36 -- accel/accel.sh@20 -- # val=0x1 00:06:13.938 21:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.938 21:21:36 -- accel/accel.sh@19 -- # IFS=: 00:06:13.938 21:21:36 -- accel/accel.sh@19 -- # read -r var val 00:06:13.938 21:21:36 -- accel/accel.sh@20 -- # val= 00:06:13.938 21:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.938 21:21:36 -- accel/accel.sh@19 -- # IFS=: 00:06:13.938 21:21:36 -- accel/accel.sh@19 -- # read -r var val 00:06:13.938 21:21:36 -- accel/accel.sh@20 -- # val= 00:06:13.938 21:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.938 21:21:36 -- accel/accel.sh@19 -- # IFS=: 00:06:13.938 21:21:36 -- accel/accel.sh@19 -- # read -r var val 00:06:13.938 21:21:36 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:13.938 21:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.938 21:21:36 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:13.938 21:21:36 -- accel/accel.sh@19 -- # IFS=: 00:06:13.938 
21:21:36 -- accel/accel.sh@19 -- # read -r var val
00:06:13.938 21:21:36 -- accel/accel.sh@20 -- # val=0 val='4096 bytes' val='8192 bytes' val= val=software val=32 val=32 val=1 val='1 seconds' val=Yes val= val=
00:06:13.938 21:21:36 -- accel/accel.sh@22 -- # accel_module=software
00:06:15.319 21:21:37 -- accel/accel.sh@20 -- # val= val= val= val= val= val=
00:06:15.319 21:21:37 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:15.319 21:21:37 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:15.319 21:21:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:15.319 real 0m1.371s
00:06:15.319 user 0m1.255s
00:06:15.319 sys 0m0.130s
00:06:15.319 21:21:37 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:15.319 21:21:37 -- common/autotest_common.sh@10 -- # set +x
00:06:15.319 ************************************
00:06:15.319 END TEST accel_copy_crc32c_C2
00:06:15.319 ************************************
00:06:15.319 21:21:37 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:15.319 21:21:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:06:15.319 21:21:37 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:15.319 21:21:37 -- common/autotest_common.sh@10 -- # set +x
00:06:15.319 ************************************
00:06:15.319 START TEST accel_dualcast
00:06:15.319 ************************************
21:21:38 -- accel/accel.sh@16 -- # local accel_opc
21:21:38 -- accel/accel.sh@17 -- # local accel_module
21:21:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
21:21:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
21:21:38 -- accel/accel.sh@12 -- # build_accel_config
21:21:38 -- accel/accel.sh@41 -- # jq -r .
[2024-04-24 21:21:38.062212] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:06:15.319 [2024-04-24 21:21:38.062269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692648 ]
00:06:15.319 EAL: No free 2048 kB hugepages reported on node 1
00:06:15.319 [2024-04-24 21:21:38.131216] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.319 [2024-04-24 21:21:38.199107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.579 21:21:38 -- accel/accel.sh@20 -- # val= val= val=0x1 val= val= val=dualcast val='4096 bytes' val= val=software val=32 val=32 val=1 val='1 seconds' val=Yes val= val=
00:06:15.579 21:21:38 -- accel/accel.sh@23 -- # accel_opc=dualcast
00:06:15.579 21:21:38 -- accel/accel.sh@22 -- # accel_module=software
00:06:16.519 21:21:39 -- accel/accel.sh@20 -- # val= val= val= val= val= val=
00:06:16.519 21:21:39 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:16.519 21:21:39 -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:16.519 21:21:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:16.519 real 0m1.356s
00:06:16.519 user 0m1.241s
00:06:16.519 sys 0m0.128s
00:06:16.519 21:21:39 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:16.519 21:21:39 -- common/autotest_common.sh@10 -- # set +x
00:06:16.519 ************************************
00:06:16.519 END TEST accel_dualcast
00:06:16.519 ************************************
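Each block prints the full accel_perf command line, so a run can be reproduced outside the harness. A sketch using the paths shown in this log; the -c /dev/fd/62 JSON config fd is supplied by build_accel_config and is omitted here:

    # paths and flags as printed in the log above; -y asks accel_perf
    # to verify the results of each submitted operation
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dualcast -y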
00:06:16.779 21:21:39 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:16.779 21:21:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:06:16.779 21:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:16.779 21:21:39 -- common/autotest_common.sh@10 -- # set +x
00:06:16.779 ************************************
00:06:16.779 START TEST accel_compare
00:06:16.779 ************************************
21:21:39 -- accel/accel.sh@16 -- # local accel_opc
21:21:39 -- accel/accel.sh@17 -- # local accel_module
21:21:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
21:21:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
21:21:39 -- accel/accel.sh@12 -- # build_accel_config
21:21:39 -- accel/accel.sh@41 -- # jq -r .
[2024-04-24 21:21:39.597407] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:06:16.780 [2024-04-24 21:21:39.597476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692943 ]
00:06:16.780 EAL: No free 2048 kB hugepages reported on node 1
00:06:16.780 [2024-04-24 21:21:39.667755] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:17.040 [2024-04-24 21:21:39.737730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.040 21:21:39 -- accel/accel.sh@20 -- # val= val= val=0x1 val= val= val=compare val='4096 bytes' val= val=software val=32 val=32 val=1 val='1 seconds' val=Yes val= val=
00:06:17.040 21:21:39 -- accel/accel.sh@23 -- # accel_opc=compare
00:06:17.040 21:21:39 -- accel/accel.sh@22 -- # accel_module=software
00:06:18.422 21:21:40 -- accel/accel.sh@20 -- # val= val= val= val= val= val=
00:06:18.422 21:21:40 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:18.422 21:21:40 -- accel/accel.sh@27 -- # [[ -n compare ]]
00:06:18.422 21:21:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:18.422 real 0m1.366s
00:06:18.422 user 0m1.252s
00:06:18.422 sys 0m0.127s
00:06:18.422 21:21:40 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:18.422 21:21:40 -- common/autotest_common.sh@10 -- # set +x
00:06:18.422 ************************************
00:06:18.422 END TEST accel_compare
00:06:18.422 ************************************
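The START/END banners and the real/user/sys triplet around every test come from the harness's run_test wrapper in autotest_common.sh; a minimal re-sketch of that behavior in bash (function body hypothetical, banners and timing as observed in this log):

    # run_test_sketch NAME CMD ARGS... -- hypothetical stand-in for run_test
    run_test_sketch() {
      local name=$1; shift
      printf '%s\nSTART TEST %s\n%s\n' '****' "$name" '****'
      time "$@"                  # bash's time keyword prints real/user/sys
      printf '%s\nEND TEST %s\n%s\n' '****' "$name" '****'
    }
    run_test_sketch accel_compare accel_perf -t 1 -w compare -y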
00:06:18.422 21:21:40 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:06:18.422 21:21:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:06:18.422 21:21:40 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:18.422 21:21:40 -- common/autotest_common.sh@10 -- # set +x
00:06:18.422 ************************************
00:06:18.422 START TEST accel_xor
00:06:18.422 ************************************
21:21:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
21:21:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
21:21:41 -- accel/accel.sh@12 -- # build_accel_config
21:21:41 -- accel/accel.sh@41 -- # jq -r .
[2024-04-24 21:21:41.136754] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:06:18.422 [2024-04-24 21:21:41.136816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693236 ]
00:06:18.422 EAL: No free 2048 kB hugepages reported on node 1
00:06:18.423 [2024-04-24 21:21:41.205216] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:18.423 [2024-04-24 21:21:41.272810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:18.683 21:21:41 -- accel/accel.sh@20 -- # val= val= val=0x1 val= val= val=xor val=2 val='4096 bytes' val= val=software val=32 val=32 val=1 val='1 seconds' val=Yes val= val=
00:06:18.683 21:21:41 -- accel/accel.sh@23 -- # accel_opc=xor
00:06:18.683 21:21:41 -- accel/accel.sh@22 -- # accel_module=software
00:06:19.632 21:21:42 -- accel/accel.sh@20 -- # val= val= val= val= val= val=
00:06:19.632 21:21:42 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:19.632 21:21:42 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:19.632 21:21:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:19.632 real 0m1.360s
00:06:19.632 user 0m1.247s
00:06:19.632 sys 0m0.126s
00:06:19.632 21:21:42 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:19.632 21:21:42 -- common/autotest_common.sh@10 -- # set +x
00:06:19.632 ************************************
00:06:19.632 END TEST accel_xor
00:06:19.632 ************************************
00:06:19.632 21:21:42 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:19.632 21:21:42 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:06:19.892 21:21:42 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:19.892 21:21:42 -- common/autotest_common.sh@10 -- # set +x
00:06:19.892 ************************************
00:06:19.892 START TEST accel_xor
00:06:19.892 ************************************
21:21:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
21:21:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
21:21:42 -- accel/accel.sh@12 -- # build_accel_config
21:21:42 -- accel/accel.sh@41 -- # jq -r .
[2024-04-24 21:21:42.681344] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:06:19.892 [2024-04-24 21:21:42.681401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693530 ]
00:06:19.892 EAL: No free 2048 kB hugepages reported on node 1
00:06:19.892 [2024-04-24 21:21:42.751758] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:20.152 [2024-04-24 21:21:42.820810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.152 21:21:42 -- accel/accel.sh@20 -- # val= val= val=0x1 val= val= val=xor val=3 val='4096 bytes' val= val=software val=32 val=32 val=1 val='1 seconds' val=Yes val= val=
00:06:20.152 21:21:42 -- accel/accel.sh@23 -- # accel_opc=xor
00:06:20.152 21:21:42 -- accel/accel.sh@22 -- # accel_module=software
00:06:21.532 21:21:44 -- accel/accel.sh@20 -- # val= val= val= val= val= val=
00:06:21.532 21:21:44 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:21.532 21:21:44 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:21.532 21:21:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:21.532 real 0m1.366s
00:06:21.532 user 0m1.251s
00:06:21.532 sys 0m0.127s
00:06:21.532 21:21:44 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:21.532 21:21:44 -- common/autotest_common.sh@10 -- # set +x
00:06:21.532 ************************************
00:06:21.532 END TEST accel_xor
00:06:21.532 ************************************
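The only difference between the two xor passes is the source count: the parsed config shows val=2 in the first and val=3 in the second, matching the extra -x 3 flag. A one-line host-side illustration of a three-way xor (operand values arbitrary):

    printf '%02x\n' $(( 0xa5 ^ 0x5a ^ 0xff ))   # 0xa5^0x5a=0xff, then ^0xff prints 00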
00:06:21.532 21:21:44 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:06:21.532 21:21:44 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:06:21.532 21:21:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:21.532 21:21:44 -- common/autotest_common.sh@10 -- # set +x
00:06:21.532 ************************************
00:06:21.532 START TEST accel_dif_verify
00:06:21.532 ************************************
21:21:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
21:21:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
21:21:44 -- accel/accel.sh@12 -- # build_accel_config
21:21:44 -- accel/accel.sh@41 -- # jq -r .
[2024-04-24 21:21:44.245322] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:06:21.532 [2024-04-24 21:21:44.245393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693823 ]
00:06:21.532 EAL: No free 2048 kB hugepages reported on node 1
00:06:21.532 [2024-04-24 21:21:44.317910] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:21.792 [2024-04-24 21:21:44.390277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.792 21:21:44 -- accel/accel.sh@20 -- # val= val= val=0x1 val= val= val=dif_verify val='4096 bytes' val='4096 bytes' val='512 bytes' val='8 bytes' val= val=software val=32 val=32 val=1 val='1 seconds' val=No val= val=
00:06:21.792 21:21:44 -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:06:21.792 21:21:44 -- accel/accel.sh@22 -- # accel_module=software
00:06:22.731 21:21:45 -- accel/accel.sh@20 -- # val= val= val= val= val= val=
00:06:22.731 21:21:45 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:22.731 21:21:45 -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:06:22.731 21:21:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:22.731 real 0m1.372s
00:06:22.731 user 0m1.256s
00:06:22.731 sys 0m0.130s
00:06:22.731 21:21:45 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:22.731 21:21:45 -- common/autotest_common.sh@10 -- # set +x
00:06:22.731 ************************************
00:06:22.731 END TEST accel_dif_verify
00:06:22.731 ************************************
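The dif_* configs read '512 bytes' and '8 bytes' alongside the two 4096-byte buffer sizes, which lines up with 512-byte logical blocks each carrying 8 bytes of DIF protection information; this reading of the fields is an inference from the values, not something the log states. The arithmetic under that assumption:

    # assumed layout: 4096-byte transfer, 512-byte blocks, 8-byte DIF per block
    echo "$((4096 / 512)) blocks of $((512 + 8)) bytes each"   # 8 blocks of 520 bytes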
00:06:22.990 21:21:45 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:06:22.990 21:21:45 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:06:22.990 21:21:45 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:22.990 21:21:45 -- common/autotest_common.sh@10 -- # set +x
00:06:22.990 ************************************
00:06:22.990 START TEST accel_dif_generate
00:06:22.990 ************************************
21:21:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
21:21:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
21:21:45 -- accel/accel.sh@12 -- # build_accel_config
21:21:45 -- accel/accel.sh@41 -- # jq -r .
[2024-04-24 21:21:45.812656] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:06:22.990 [2024-04-24 21:21:45.812723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694114 ]
00:06:22.990 EAL: No free 2048 kB hugepages reported on node 1
00:06:23.250 [2024-04-24 21:21:45.885259] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:23.250 [2024-04-24 21:21:45.957369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.250 21:21:46 -- accel/accel.sh@20 -- # val= val= val=0x1 val= val= val=dif_generate val='4096 bytes' val='4096 bytes' val='512 bytes' val='8 bytes' val= val=software val=32 val=32 val=1 val='1 seconds' val=No val= val=
00:06:23.250 21:21:46 -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:06:23.250 21:21:46 -- accel/accel.sh@22 -- # accel_module=software
00:06:24.632 21:21:47 -- accel/accel.sh@20 -- # val= val= val= val= val= val=
00:06:24.632 21:21:47 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:24.632 21:21:47 -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:06:24.632 21:21:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:24.632 real 0m1.370s
00:06:24.632 user 0m1.249s
00:06:24.632 sys 0m0.135s
00:06:24.632 21:21:47 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:24.632 21:21:47 -- common/autotest_common.sh@10 -- # set +x
00:06:24.632 ************************************
00:06:24.632 END TEST accel_dif_generate
00:06:24.632 ************************************
00:06:24.632 21:21:47 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:06:24.632 21:21:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:06:24.632 21:21:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:24.632 21:21:47 -- common/autotest_common.sh@10 -- # set +x
00:06:24.632 ************************************
00:06:24.632 START TEST accel_dif_generate_copy
00:06:24.632 ************************************
21:21:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
21:21:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
21:21:47 -- accel/accel.sh@12 -- # build_accel_config
21:21:47 -- accel/accel.sh@41 -- # jq -r .
[2024-04-24 21:21:47.346495] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:06:24.632 [2024-04-24 21:21:47.346573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694401 ]
00:06:24.632 EAL: No free 2048 kB hugepages reported on node 1
00:06:24.632 [2024-04-24 21:21:47.417665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:24.892 [2024-04-24 21:21:47.486475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.892 21:21:47 -- accel/accel.sh@20 -- # val= val= val=0x1 val= val= val=dif_generate_copy val='4096 bytes' val='4096 bytes' val= val=software val=32 val=32 val=1 val='1 seconds' val=No val= val=
00:06:24.892 21:21:47 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:06:24.892 21:21:47 -- accel/accel.sh@22 -- # accel_module=software
00:06:25.830 21:21:48 -- accel/accel.sh@20 -- # val= val= val= val= val= val=
00:06:25.831 21:21:48 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:25.831 21:21:48 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:06:25.831 21:21:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:25.831 real 0m1.367s
00:06:25.831 user 0m1.247s
00:06:25.831 sys 0m0.132s
00:06:25.831 21:21:48 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:25.831 21:21:48 -- common/autotest_common.sh@10 -- # set +x
00:06:26.090 ************************************
00:06:26.090 END TEST accel_dif_generate_copy
00:06:26.090 ************************************
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.090 21:21:48 -- common/autotest_common.sh@10 -- # set +x 00:06:26.090 ************************************ 00:06:26.090 START TEST accel_comp 00:06:26.090 ************************************ 00:06:26.090 21:21:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.090 21:21:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.090 21:21:48 -- accel/accel.sh@17 -- # local accel_module 00:06:26.090 21:21:48 -- accel/accel.sh@19 -- # IFS=: 00:06:26.090 21:21:48 -- accel/accel.sh@19 -- # read -r var val 00:06:26.090 21:21:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.090 21:21:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.090 21:21:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.090 21:21:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.090 21:21:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.090 21:21:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.090 21:21:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.090 21:21:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.090 21:21:48 -- accel/accel.sh@40 -- # local IFS=, 00:06:26.090 21:21:48 -- accel/accel.sh@41 -- # jq -r . 00:06:26.090 [2024-04-24 21:21:48.908442] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:06:26.090 [2024-04-24 21:21:48.908515] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694702 ] 00:06:26.090 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.090 [2024-04-24 21:21:48.978835] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.368 [2024-04-24 21:21:49.049586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val= 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val= 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val= 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val=0x1 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val= 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val= 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 
-- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val=compress 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val= 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val=software 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@22 -- # accel_module=software 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val=32 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val=32 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val=1 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val=No 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val= 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:26.369 21:21:49 -- accel/accel.sh@20 -- # val= 00:06:26.369 21:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # IFS=: 00:06:26.369 21:21:49 -- accel/accel.sh@19 -- # read -r var val 00:06:27.746 21:21:50 -- accel/accel.sh@20 -- # val= 00:06:27.746 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.746 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:27.746 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:27.746 21:21:50 -- accel/accel.sh@20 -- # val= 00:06:27.746 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.746 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:27.746 21:21:50 -- accel/accel.sh@19 -- # read 
-r var val 00:06:27.746 21:21:50 -- accel/accel.sh@20 -- # val= 00:06:27.746 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.746 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:27.746 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:27.746 21:21:50 -- accel/accel.sh@20 -- # val= 00:06:27.746 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.746 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:27.746 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:27.746 21:21:50 -- accel/accel.sh@20 -- # val= 00:06:27.746 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.746 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:27.746 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:27.746 21:21:50 -- accel/accel.sh@20 -- # val= 00:06:27.746 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.746 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:27.746 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:27.746 21:21:50 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.746 21:21:50 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:27.746 21:21:50 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.746 00:06:27.746 real 0m1.372s 00:06:27.746 user 0m1.260s 00:06:27.746 sys 0m0.127s 00:06:27.746 21:21:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.746 21:21:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.746 ************************************ 00:06:27.746 END TEST accel_comp 00:06:27.746 ************************************ 00:06:27.746 21:21:50 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:27.746 21:21:50 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:27.746 21:21:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.746 21:21:50 -- common/autotest_common.sh@10 -- # set +x 00:06:27.746 ************************************ 00:06:27.746 START TEST accel_decomp 00:06:27.746 ************************************ 00:06:27.746 21:21:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:27.746 21:21:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.746 21:21:50 -- accel/accel.sh@17 -- # local accel_module 00:06:27.746 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:27.746 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:27.746 21:21:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:27.746 21:21:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:27.746 21:21:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.746 21:21:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.746 21:21:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.746 21:21:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.746 21:21:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.746 21:21:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.746 21:21:50 -- accel/accel.sh@40 -- # local IFS=, 00:06:27.746 21:21:50 -- accel/accel.sh@41 -- # jq -r . 00:06:27.746 [2024-04-24 21:21:50.469534] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
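The compress run above and the decompress run starting below differ from the fixed-buffer workloads earlier in this block: -l points accel_perf at the spdk/test/accel/bib input file, and (in the decompress case) -y asks it to verify each completed operation. A hand-run equivalent under the same local-checkout assumption as before:

  # 1-second software decompress of the bib test file, with verification
  ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y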
00:06:27.746 [2024-04-24 21:21:50.469608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694989 ] 00:06:27.746 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.746 [2024-04-24 21:21:50.542301] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.746 [2024-04-24 21:21:50.610976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.005 21:21:50 -- accel/accel.sh@20 -- # val= 00:06:28.005 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.005 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.005 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.005 21:21:50 -- accel/accel.sh@20 -- # val= 00:06:28.005 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.005 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.005 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.005 21:21:50 -- accel/accel.sh@20 -- # val= 00:06:28.005 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.005 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.005 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.005 21:21:50 -- accel/accel.sh@20 -- # val=0x1 00:06:28.005 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.005 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.005 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.005 21:21:50 -- accel/accel.sh@20 -- # val= 00:06:28.005 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.005 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.005 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.005 21:21:50 -- accel/accel.sh@20 -- # val= 00:06:28.005 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.005 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.005 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.005 21:21:50 -- accel/accel.sh@20 -- # val=decompress 00:06:28.006 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 21:21:50 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 21:21:50 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.006 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 21:21:50 -- accel/accel.sh@20 -- # val= 00:06:28.006 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 21:21:50 -- accel/accel.sh@20 -- # val=software 00:06:28.006 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 21:21:50 -- accel/accel.sh@22 -- # accel_module=software 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 21:21:50 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:28.006 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 21:21:50 -- accel/accel.sh@20 -- # val=32 00:06:28.006 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 21:21:50 
-- accel/accel.sh@19 -- # read -r var val 00:06:28.006 21:21:50 -- accel/accel.sh@20 -- # val=32 00:06:28.006 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 21:21:50 -- accel/accel.sh@20 -- # val=1 00:06:28.006 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 21:21:50 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.006 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 21:21:50 -- accel/accel.sh@20 -- # val=Yes 00:06:28.006 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 21:21:50 -- accel/accel.sh@20 -- # val= 00:06:28.006 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.006 21:21:50 -- accel/accel.sh@20 -- # val= 00:06:28.006 21:21:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # IFS=: 00:06:28.006 21:21:50 -- accel/accel.sh@19 -- # read -r var val 00:06:28.944 21:21:51 -- accel/accel.sh@20 -- # val= 00:06:28.944 21:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.944 21:21:51 -- accel/accel.sh@19 -- # IFS=: 00:06:28.944 21:21:51 -- accel/accel.sh@19 -- # read -r var val 00:06:28.944 21:21:51 -- accel/accel.sh@20 -- # val= 00:06:28.944 21:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.944 21:21:51 -- accel/accel.sh@19 -- # IFS=: 00:06:28.944 21:21:51 -- accel/accel.sh@19 -- # read -r var val 00:06:28.944 21:21:51 -- accel/accel.sh@20 -- # val= 00:06:28.944 21:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.944 21:21:51 -- accel/accel.sh@19 -- # IFS=: 00:06:28.944 21:21:51 -- accel/accel.sh@19 -- # read -r var val 00:06:28.944 21:21:51 -- accel/accel.sh@20 -- # val= 00:06:28.944 21:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.944 21:21:51 -- accel/accel.sh@19 -- # IFS=: 00:06:28.944 21:21:51 -- accel/accel.sh@19 -- # read -r var val 00:06:28.944 21:21:51 -- accel/accel.sh@20 -- # val= 00:06:28.944 21:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.944 21:21:51 -- accel/accel.sh@19 -- # IFS=: 00:06:28.944 21:21:51 -- accel/accel.sh@19 -- # read -r var val 00:06:28.944 21:21:51 -- accel/accel.sh@20 -- # val= 00:06:28.944 21:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.944 21:21:51 -- accel/accel.sh@19 -- # IFS=: 00:06:28.944 21:21:51 -- accel/accel.sh@19 -- # read -r var val 00:06:28.944 21:21:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.944 21:21:51 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:28.944 21:21:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.944 00:06:28.944 real 0m1.369s 00:06:28.944 user 0m1.250s 00:06:28.944 sys 0m0.134s 00:06:28.944 21:21:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.944 21:21:51 -- common/autotest_common.sh@10 -- # set +x 00:06:28.944 ************************************ 00:06:28.944 END TEST accel_decomp 00:06:28.944 ************************************ 00:06:29.203 21:21:51 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:29.203 21:21:51 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:29.203 21:21:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.203 21:21:51 -- common/autotest_common.sh@10 -- # set +x 00:06:29.203 ************************************ 00:06:29.203 START TEST accel_decmop_full 00:06:29.203 ************************************ 00:06:29.203 21:21:52 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:29.203 21:21:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.203 21:21:52 -- accel/accel.sh@17 -- # local accel_module 00:06:29.203 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.203 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.203 21:21:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:29.203 21:21:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:29.203 21:21:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.203 21:21:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.203 21:21:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.203 21:21:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.203 21:21:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.203 21:21:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.203 21:21:52 -- accel/accel.sh@40 -- # local IFS=, 00:06:29.203 21:21:52 -- accel/accel.sh@41 -- # jq -r . 00:06:29.203 [2024-04-24 21:21:52.057850] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
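accel_decmop_full repeats the decompress case with -o 0 appended. With that flag the transfer size follows the input file instead of the 4096-byte default used so far, which is why the trace below records '111250 bytes'. A sketch under the same assumptions:

  # full-buffer decompress: -o 0 sizes the transfer from the bib file itself
  ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y -o 0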
00:06:29.203 [2024-04-24 21:21:52.057932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695284 ] 00:06:29.464 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.464 [2024-04-24 21:21:52.130988] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.464 [2024-04-24 21:21:52.200901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val= 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val= 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val= 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val=0x1 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val= 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val= 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val=decompress 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val= 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val=software 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@22 -- # accel_module=software 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val=32 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 
21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val=32 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val=1 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val=Yes 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val= 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:29.464 21:21:52 -- accel/accel.sh@20 -- # val= 00:06:29.464 21:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # IFS=: 00:06:29.464 21:21:52 -- accel/accel.sh@19 -- # read -r var val 00:06:30.846 21:21:53 -- accel/accel.sh@20 -- # val= 00:06:30.846 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.846 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:30.846 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:30.846 21:21:53 -- accel/accel.sh@20 -- # val= 00:06:30.846 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.846 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:30.846 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:30.846 21:21:53 -- accel/accel.sh@20 -- # val= 00:06:30.846 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.846 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:30.846 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:30.846 21:21:53 -- accel/accel.sh@20 -- # val= 00:06:30.846 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.846 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:30.846 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:30.846 21:21:53 -- accel/accel.sh@20 -- # val= 00:06:30.846 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.846 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:30.846 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:30.846 21:21:53 -- accel/accel.sh@20 -- # val= 00:06:30.846 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.846 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:30.846 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:30.846 21:21:53 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.846 21:21:53 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:30.846 21:21:53 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.846 00:06:30.846 real 0m1.382s 00:06:30.846 user 0m1.254s 00:06:30.846 sys 0m0.141s 00:06:30.846 21:21:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:30.846 21:21:53 -- common/autotest_common.sh@10 -- # set +x 00:06:30.846 ************************************ 00:06:30.846 END TEST accel_decmop_full 00:06:30.846 ************************************ 00:06:30.846 21:21:53 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:30.846 21:21:53 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:30.846 21:21:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.846 21:21:53 -- common/autotest_common.sh@10 -- # set +x 00:06:30.846 ************************************ 00:06:30.846 START TEST accel_decomp_mcore 00:06:30.846 ************************************ 00:06:30.846 21:21:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:30.846 21:21:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.846 21:21:53 -- accel/accel.sh@17 -- # local accel_module 00:06:30.846 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:30.846 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:30.846 21:21:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:30.846 21:21:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:30.846 21:21:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.846 21:21:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.846 21:21:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.846 21:21:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.846 21:21:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.846 21:21:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.846 21:21:53 -- accel/accel.sh@40 -- # local IFS=, 00:06:30.846 21:21:53 -- accel/accel.sh@41 -- # jq -r . 00:06:30.846 [2024-04-24 21:21:53.605474] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
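accel_decomp_mcore adds -m 0xf, a four-core mask: the EAL parameters below carry -c 0xf, the app reports 'Total cores available: 4', and one reactor starts per core. That also explains the timing summary at the end of this test, where user time (about 4.6s) is roughly four times real time, since all four reactors poll for the full one-second run. Equivalent hand run, same assumptions:

  # decompress spread across four cores (mask 0xf), one reactor per core
  ./spdk/build/examples/accel_perf -m 0xf -t 1 -w decompress -l ./spdk/test/accel/bib -y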
00:06:30.846 [2024-04-24 21:21:53.605547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695575 ] 00:06:30.846 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.846 [2024-04-24 21:21:53.677349] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.107 [2024-04-24 21:21:53.750062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.107 [2024-04-24 21:21:53.750160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.107 [2024-04-24 21:21:53.750246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.107 [2024-04-24 21:21:53.750248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val= 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val= 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val= 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val=0xf 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val= 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val= 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val=decompress 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val= 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val=software 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@22 -- # accel_module=software 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val=32 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val=32 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val=1 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val=Yes 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val= 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:31.107 21:21:53 -- accel/accel.sh@20 -- # val= 00:06:31.107 21:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # IFS=: 00:06:31.107 21:21:53 -- accel/accel.sh@19 -- # read -r var val 00:06:32.093 21:21:54 -- accel/accel.sh@20 -- # val= 00:06:32.093 21:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # IFS=: 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # read -r var val 00:06:32.093 21:21:54 -- accel/accel.sh@20 -- # val= 00:06:32.093 21:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # IFS=: 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # read -r var val 00:06:32.093 21:21:54 -- accel/accel.sh@20 -- # val= 00:06:32.093 21:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # IFS=: 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # read -r var val 00:06:32.093 21:21:54 -- accel/accel.sh@20 -- # val= 00:06:32.093 21:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # IFS=: 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # read -r var val 00:06:32.093 21:21:54 -- accel/accel.sh@20 -- # val= 00:06:32.093 21:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # IFS=: 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # read -r var val 00:06:32.093 21:21:54 -- accel/accel.sh@20 -- # val= 00:06:32.093 21:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # IFS=: 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # read -r var val 00:06:32.093 21:21:54 -- accel/accel.sh@20 -- # val= 00:06:32.093 21:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # IFS=: 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # read -r var val 00:06:32.093 21:21:54 -- accel/accel.sh@20 -- # val= 00:06:32.093 21:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.093 
21:21:54 -- accel/accel.sh@19 -- # IFS=: 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # read -r var val 00:06:32.093 21:21:54 -- accel/accel.sh@20 -- # val= 00:06:32.093 21:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # IFS=: 00:06:32.093 21:21:54 -- accel/accel.sh@19 -- # read -r var val 00:06:32.093 21:21:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.093 21:21:54 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.093 21:21:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.093 00:06:32.094 real 0m1.385s 00:06:32.094 user 0m4.585s 00:06:32.094 sys 0m0.147s 00:06:32.094 21:21:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.094 21:21:54 -- common/autotest_common.sh@10 -- # set +x 00:06:32.094 ************************************ 00:06:32.094 END TEST accel_decomp_mcore 00:06:32.094 ************************************ 00:06:32.353 21:21:54 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.353 21:21:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:32.353 21:21:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.353 21:21:54 -- common/autotest_common.sh@10 -- # set +x 00:06:32.353 ************************************ 00:06:32.353 START TEST accel_decomp_full_mcore 00:06:32.353 ************************************ 00:06:32.353 21:21:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.353 21:21:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.353 21:21:55 -- accel/accel.sh@17 -- # local accel_module 00:06:32.353 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 21:21:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.353 21:21:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:32.353 21:21:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.353 21:21:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.353 21:21:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.353 21:21:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.353 21:21:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.353 21:21:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.353 21:21:55 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.353 21:21:55 -- accel/accel.sh@41 -- # jq -r . 00:06:32.353 [2024-04-24 21:21:55.176849] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:06:32.353 [2024-04-24 21:21:55.176912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695870 ] 00:06:32.353 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.612 [2024-04-24 21:21:55.250832] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.612 [2024-04-24 21:21:55.324186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.612 [2024-04-24 21:21:55.324283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.612 [2024-04-24 21:21:55.324345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.612 [2024-04-24 21:21:55.324347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val= 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val= 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val= 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val=0xf 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val= 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val= 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val=decompress 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val= 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val=software 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@22 -- # accel_module=software 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val=32 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val=32 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val=1 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val=Yes 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val= 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:32.612 21:21:55 -- accel/accel.sh@20 -- # val= 00:06:32.612 21:21:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # IFS=: 00:06:32.612 21:21:55 -- accel/accel.sh@19 -- # read -r var val 00:06:33.993 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:33.993 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:33.993 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:33.993 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:33.993 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:33.993 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:33.993 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:33.993 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:33.993 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:33.993 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:33.993 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:33.993 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:33.993 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:33.993 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:33.993 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:33.993 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.993 
21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:33.993 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:33.993 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:33.993 21:21:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.993 21:21:56 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.993 21:21:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.993 00:06:33.993 real 0m1.396s 00:06:33.993 user 0m4.617s 00:06:33.993 sys 0m0.147s 00:06:33.993 21:21:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.993 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:06:33.993 ************************************ 00:06:33.993 END TEST accel_decomp_full_mcore 00:06:33.993 ************************************ 00:06:33.993 21:21:56 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:33.993 21:21:56 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:33.993 21:21:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.993 21:21:56 -- common/autotest_common.sh@10 -- # set +x 00:06:33.993 ************************************ 00:06:33.993 START TEST accel_decomp_mthread 00:06:33.993 ************************************ 00:06:33.993 21:21:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:33.993 21:21:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.993 21:21:56 -- accel/accel.sh@17 -- # local accel_module 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:33.993 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:33.993 21:21:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:33.993 21:21:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:33.993 21:21:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.993 21:21:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.993 21:21:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.993 21:21:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.993 21:21:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.993 21:21:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.993 21:21:56 -- accel/accel.sh@40 -- # local IFS=, 00:06:33.993 21:21:56 -- accel/accel.sh@41 -- # jq -r . 00:06:33.993 [2024-04-24 21:21:56.749947] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
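accel_decomp_mthread stays on one core (the EAL line below shows -c 0x1) but passes -T 2, so accel_perf places two worker threads on that core instead of one. Sketch, same assumptions as the runs above:

  # single-core decompress with two worker threads per core (-T 2)
  ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y -T 2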
00:06:33.994 [2024-04-24 21:21:56.750018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696165 ] 00:06:33.994 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.994 [2024-04-24 21:21:56.822650] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.253 [2024-04-24 21:21:56.892014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val=0x1 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val=decompress 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val=software 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@22 -- # accel_module=software 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val=32 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 
-- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val=32 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val=2 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val=Yes 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:34.253 21:21:56 -- accel/accel.sh@20 -- # val= 00:06:34.253 21:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # IFS=: 00:06:34.253 21:21:56 -- accel/accel.sh@19 -- # read -r var val 00:06:35.630 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.630 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.630 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.630 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.630 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.630 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.630 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.630 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.630 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.630 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.630 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.630 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.630 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.630 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.630 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.630 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.631 21:21:58 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:35.631 21:21:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.631 00:06:35.631 real 0m1.373s 00:06:35.631 user 0m1.257s 00:06:35.631 sys 0m0.129s 00:06:35.631 21:21:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:35.631 21:21:58 -- common/autotest_common.sh@10 -- # set +x 
00:06:35.631 ************************************ 00:06:35.631 END TEST accel_decomp_mthread 00:06:35.631 ************************************ 00:06:35.631 21:21:58 -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:35.631 21:21:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:35.631 21:21:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.631 21:21:58 -- common/autotest_common.sh@10 -- # set +x 00:06:35.631 ************************************ 00:06:35.631 START TEST accel_decomp_full_mthread 00:06:35.631 ************************************ 00:06:35.631 21:21:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:35.631 21:21:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.631 21:21:58 -- accel/accel.sh@17 -- # local accel_module 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:35.631 21:21:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:35.631 21:21:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.631 21:21:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.631 21:21:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.631 21:21:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.631 21:21:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.631 21:21:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.631 21:21:58 -- accel/accel.sh@40 -- # local IFS=, 00:06:35.631 21:21:58 -- accel/accel.sh@41 -- # jq -r . 00:06:35.631 [2024-04-24 21:21:58.320830] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
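For reference, the xtrace run above reduces to a single accel_perf invocation. A minimal standalone sketch of the same run follows, using the binary path and flags exactly as traced; the fd-62 redirection and the empty JSON config are assumptions for illustration (the harness normally builds that JSON from whatever accel modules were requested):

#!/usr/bin/env bash
# Sketch of the accel_decomp_full_mthread case traced above.
# -t 1: run for 1 second; -w decompress: workload; -l: compressed input file;
# -y: verify the output; -o 0: use the whole input file as one buffer (hence
# the '111250 bytes' value in the trace, vs. '4096 bytes' in the plain
# mthread case); -T 2: two worker threads.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" \
    -c /dev/fd/62 -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2 \
    62< <(echo '{"subsystems": []}')   # accel config JSON fed over fd 62; empty here (assumption)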
00:06:35.631 [2024-04-24 21:21:58.320885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696463 ] 00:06:35.631 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.631 [2024-04-24 21:21:58.390341] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.631 [2024-04-24 21:21:58.458122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val=0x1 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val=decompress 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val=software 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@22 -- # accel_module=software 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val=32 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 
21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val=32 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val=2 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val=Yes 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:35.631 21:21:58 -- accel/accel.sh@20 -- # val= 00:06:35.631 21:21:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # IFS=: 00:06:35.631 21:21:58 -- accel/accel.sh@19 -- # read -r var val 00:06:37.011 21:21:59 -- accel/accel.sh@20 -- # val= 00:06:37.011 21:21:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.011 21:21:59 -- accel/accel.sh@19 -- # IFS=: 00:06:37.011 21:21:59 -- accel/accel.sh@19 -- # read -r var val 00:06:37.011 21:21:59 -- accel/accel.sh@20 -- # val= 00:06:37.011 21:21:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.011 21:21:59 -- accel/accel.sh@19 -- # IFS=: 00:06:37.011 21:21:59 -- accel/accel.sh@19 -- # read -r var val 00:06:37.011 21:21:59 -- accel/accel.sh@20 -- # val= 00:06:37.011 21:21:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.011 21:21:59 -- accel/accel.sh@19 -- # IFS=: 00:06:37.011 21:21:59 -- accel/accel.sh@19 -- # read -r var val 00:06:37.011 21:21:59 -- accel/accel.sh@20 -- # val= 00:06:37.011 21:21:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.011 21:21:59 -- accel/accel.sh@19 -- # IFS=: 00:06:37.011 21:21:59 -- accel/accel.sh@19 -- # read -r var val 00:06:37.011 21:21:59 -- accel/accel.sh@20 -- # val= 00:06:37.011 21:21:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.011 21:21:59 -- accel/accel.sh@19 -- # IFS=: 00:06:37.011 21:21:59 -- accel/accel.sh@19 -- # read -r var val 00:06:37.011 21:21:59 -- accel/accel.sh@20 -- # val= 00:06:37.011 21:21:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.011 21:21:59 -- accel/accel.sh@19 -- # IFS=: 00:06:37.011 21:21:59 -- accel/accel.sh@19 -- # read -r var val 00:06:37.011 21:21:59 -- accel/accel.sh@20 -- # val= 00:06:37.011 21:21:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.011 21:21:59 -- accel/accel.sh@19 -- # IFS=: 00:06:37.011 21:21:59 -- accel/accel.sh@19 -- # read -r var val 00:06:37.011 21:21:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.011 21:21:59 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.011 21:21:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.011 00:06:37.011 real 0m1.389s 00:06:37.011 user 0m1.268s 00:06:37.011 sys 0m0.134s 00:06:37.011 21:21:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.011 21:21:59 -- common/autotest_common.sh@10 -- # 
set +x 00:06:37.011 ************************************ 00:06:37.011 END TEST accel_decomp_full_mthread 00:06:37.011 ************************************ 00:06:37.011 21:21:59 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:37.011 21:21:59 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:37.011 21:21:59 -- accel/accel.sh@137 -- # build_accel_config 00:06:37.011 21:21:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:37.011 21:21:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.011 21:21:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.011 21:21:59 -- common/autotest_common.sh@10 -- # set +x 00:06:37.011 21:21:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.011 21:21:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.011 21:21:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.011 21:21:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.011 21:21:59 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.011 21:21:59 -- accel/accel.sh@41 -- # jq -r . 00:06:37.011 ************************************ 00:06:37.011 START TEST accel_dif_functional_tests 00:06:37.011 ************************************ 00:06:37.011 21:21:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:37.270 [2024-04-24 21:21:59.920896] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:06:37.270 [2024-04-24 21:21:59.920935] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696757 ] 00:06:37.270 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.270 [2024-04-24 21:21:59.987970] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.270 [2024-04-24 21:22:00.075832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.270 [2024-04-24 21:22:00.075925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.270 [2024-04-24 21:22:00.075927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.270 00:06:37.270 00:06:37.270 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.270 http://cunit.sourceforge.net/ 00:06:37.270 00:06:37.270 00:06:37.270 Suite: accel_dif 00:06:37.270 Test: verify: DIF generated, GUARD check ...passed 00:06:37.270 Test: verify: DIF generated, APPTAG check ...passed 00:06:37.270 Test: verify: DIF generated, REFTAG check ...passed 00:06:37.270 Test: verify: DIF not generated, GUARD check ...[2024-04-24 21:22:00.143742] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:37.270 [2024-04-24 21:22:00.143790] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:37.270 passed 00:06:37.270 Test: verify: DIF not generated, APPTAG check ...[2024-04-24 21:22:00.143820] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:37.270 [2024-04-24 21:22:00.143837] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:37.270 passed 00:06:37.270 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 21:22:00.143856] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:37.270 [2024-04-24
21:22:00.143874] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:37.270 passed 00:06:37.270 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:37.270 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-24 21:22:00.143920] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:37.270 passed 00:06:37.270 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:37.270 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:37.270 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:37.270 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 21:22:00.144023] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:37.270 passed 00:06:37.270 Test: generate copy: DIF generated, GUARD check ...passed 00:06:37.270 Test: generate copy: DIF generated, APPTAG check ...passed 00:06:37.270 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:37.270 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:37.270 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:37.270 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:37.270 Test: generate copy: iovecs-len validate ...[2024-04-24 21:22:00.144190] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:37.270 passed 00:06:37.270 Test: generate copy: buffer alignment validate ...passed 00:06:37.270 00:06:37.270 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.270 suites 1 1 n/a 0 0 00:06:37.270 tests 20 20 20 0 0 00:06:37.270 asserts 204 204 204 0 n/a 00:06:37.270 00:06:37.270 Elapsed time = 0.000 seconds 00:06:37.530 00:06:37.530 real 0m0.453s 00:06:37.530 user 0m0.614s 00:06:37.530 sys 0m0.154s 00:06:37.530 21:22:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.530 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:37.530 ************************************ 00:06:37.530 END TEST accel_dif_functional_tests 00:06:37.530 ************************************ 00:06:37.530 00:06:37.530 real 0m34.989s 00:06:37.530 user 0m36.115s 00:06:37.530 sys 0m6.356s 00:06:37.530 21:22:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.530 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:37.530 ************************************ 00:06:37.530 END TEST accel 00:06:37.530 ************************************ 00:06:37.530 21:22:00 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:37.530 21:22:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:37.530 21:22:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.530 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:37.789 ************************************ 00:06:37.789 START TEST accel_rpc 00:06:37.789 ************************************ 00:06:37.789 21:22:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:37.789 * Looking for test storage...
00:06:37.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:37.789 21:22:00 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:37.789 21:22:00 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2697073 00:06:37.789 21:22:00 -- accel/accel_rpc.sh@15 -- # waitforlisten 2697073 00:06:37.789 21:22:00 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:37.789 21:22:00 -- common/autotest_common.sh@817 -- # '[' -z 2697073 ']' 00:06:37.789 21:22:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.789 21:22:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:37.789 21:22:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.789 21:22:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:37.789 21:22:00 -- common/autotest_common.sh@10 -- # set +x 00:06:38.049 [2024-04-24 21:22:00.720635] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:06:38.049 [2024-04-24 21:22:00.720695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697073 ] 00:06:38.049 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.049 [2024-04-24 21:22:00.790562] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.049 [2024-04-24 21:22:00.861823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.618 21:22:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:38.618 21:22:01 -- common/autotest_common.sh@850 -- # return 0 00:06:38.618 21:22:01 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:38.618 21:22:01 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:38.618 21:22:01 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:38.618 21:22:01 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:38.618 21:22:01 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:38.618 21:22:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:38.618 21:22:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.948 21:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:38.948 ************************************ 00:06:38.948 START TEST accel_assign_opcode 00:06:38.948 ************************************ 00:06:38.948 21:22:01 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:38.948 21:22:01 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:38.948 21:22:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:38.948 21:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:38.948 [2024-04-24 21:22:01.668176] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:38.948 21:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:38.948 21:22:01 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:38.949 21:22:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:38.949 21:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:38.949 [2024-04-24 21:22:01.676184] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:06:38.949 21:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:38.949 21:22:01 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:38.949 21:22:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:38.949 21:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:39.208 21:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.209 21:22:01 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:39.209 21:22:01 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:39.209 21:22:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.209 21:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:39.209 21:22:01 -- accel/accel_rpc.sh@42 -- # grep software 00:06:39.209 21:22:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.209 software 00:06:39.209 00:06:39.209 real 0m0.233s 00:06:39.209 user 0m0.044s 00:06:39.209 sys 0m0.015s 00:06:39.209 21:22:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.209 21:22:01 -- common/autotest_common.sh@10 -- # set +x 00:06:39.209 ************************************ 00:06:39.209 END TEST accel_assign_opcode 00:06:39.209 ************************************ 00:06:39.209 21:22:01 -- accel/accel_rpc.sh@55 -- # killprocess 2697073 00:06:39.209 21:22:01 -- common/autotest_common.sh@936 -- # '[' -z 2697073 ']' 00:06:39.209 21:22:01 -- common/autotest_common.sh@940 -- # kill -0 2697073 00:06:39.209 21:22:01 -- common/autotest_common.sh@941 -- # uname 00:06:39.209 21:22:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:39.209 21:22:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2697073 00:06:39.209 21:22:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:39.209 21:22:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:39.209 21:22:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2697073' 00:06:39.209 killing process with pid 2697073 00:06:39.209 21:22:01 -- common/autotest_common.sh@955 -- # kill 2697073 00:06:39.209 21:22:01 -- common/autotest_common.sh@960 -- # wait 2697073 00:06:39.468 00:06:39.468 real 0m1.754s 00:06:39.468 user 0m1.829s 00:06:39.468 sys 0m0.538s 00:06:39.468 21:22:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.468 21:22:02 -- common/autotest_common.sh@10 -- # set +x 00:06:39.468 ************************************ 00:06:39.468 END TEST accel_rpc 00:06:39.468 ************************************ 00:06:39.468 21:22:02 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:39.468 21:22:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.468 21:22:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.468 21:22:02 -- common/autotest_common.sh@10 -- # set +x 00:06:39.728 ************************************ 00:06:39.728 START TEST app_cmdline 00:06:39.728 ************************************ 00:06:39.728 21:22:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:39.728 * Looking for test storage... 
00:06:39.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:39.988 21:22:02 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:39.988 21:22:02 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2697440 00:06:39.988 21:22:02 -- app/cmdline.sh@18 -- # waitforlisten 2697440 00:06:39.988 21:22:02 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:39.988 21:22:02 -- common/autotest_common.sh@817 -- # '[' -z 2697440 ']' 00:06:39.988 21:22:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.988 21:22:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:39.988 21:22:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.988 21:22:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:39.988 21:22:02 -- common/autotest_common.sh@10 -- # set +x 00:06:39.988 [2024-04-24 21:22:02.674442] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:06:39.988 [2024-04-24 21:22:02.674498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697440 ] 00:06:39.988 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.988 [2024-04-24 21:22:02.743826] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.988 [2024-04-24 21:22:02.812254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.924 21:22:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:40.924 21:22:03 -- common/autotest_common.sh@850 -- # return 0 00:06:40.924 21:22:03 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:40.924 { 00:06:40.924 "version": "SPDK v24.05-pre git sha1 7aadd6759", 00:06:40.924 "fields": { 00:06:40.924 "major": 24, 00:06:40.924 "minor": 5, 00:06:40.924 "patch": 0, 00:06:40.924 "suffix": "-pre", 00:06:40.924 "commit": "7aadd6759" 00:06:40.924 } 00:06:40.924 } 00:06:40.924 21:22:03 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:40.924 21:22:03 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:40.924 21:22:03 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:40.924 21:22:03 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:40.924 21:22:03 -- app/cmdline.sh@26 -- # sort 00:06:40.924 21:22:03 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:40.924 21:22:03 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:40.924 21:22:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.924 21:22:03 -- common/autotest_common.sh@10 -- # set +x 00:06:40.924 21:22:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.924 21:22:03 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:40.924 21:22:03 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:40.924 21:22:03 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.924 21:22:03 -- common/autotest_common.sh@638 -- # local es=0 00:06:40.924 21:22:03 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.924 21:22:03 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.924 21:22:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:40.924 21:22:03 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.924 21:22:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:40.924 21:22:03 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.924 21:22:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:40.924 21:22:03 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.924 21:22:03 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:40.924 21:22:03 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:41.184 request: 00:06:41.184 { 00:06:41.184 "method": "env_dpdk_get_mem_stats", 00:06:41.184 "req_id": 1 00:06:41.184 } 00:06:41.184 Got JSON-RPC error response 00:06:41.184 response: 00:06:41.184 { 00:06:41.184 "code": -32601, 00:06:41.184 "message": "Method not found" 00:06:41.184 } 00:06:41.184 21:22:03 -- common/autotest_common.sh@641 -- # es=1 00:06:41.184 21:22:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:41.184 21:22:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:41.184 21:22:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:41.184 21:22:03 -- app/cmdline.sh@1 -- # killprocess 2697440 00:06:41.184 21:22:03 -- common/autotest_common.sh@936 -- # '[' -z 2697440 ']' 00:06:41.184 21:22:03 -- common/autotest_common.sh@940 -- # kill -0 2697440 00:06:41.184 21:22:03 -- common/autotest_common.sh@941 -- # uname 00:06:41.184 21:22:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:41.184 21:22:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2697440 00:06:41.184 21:22:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:41.184 21:22:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:41.184 21:22:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2697440' 00:06:41.184 killing process with pid 2697440 00:06:41.184 21:22:03 -- common/autotest_common.sh@955 -- # kill 2697440 00:06:41.184 21:22:03 -- common/autotest_common.sh@960 -- # wait 2697440 00:06:41.443 00:06:41.443 real 0m1.716s 00:06:41.443 user 0m1.976s 00:06:41.443 sys 0m0.493s 00:06:41.443 21:22:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:41.443 21:22:04 -- common/autotest_common.sh@10 -- # set +x 00:06:41.444 ************************************ 00:06:41.444 END TEST app_cmdline 00:06:41.444 ************************************ 00:06:41.444 21:22:04 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:41.444 21:22:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.444 21:22:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.444 21:22:04 -- common/autotest_common.sh@10 -- # set +x 00:06:41.703 ************************************ 00:06:41.703 START TEST version 00:06:41.703 
************************************ 00:06:41.703 21:22:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:41.703 * Looking for test storage... 00:06:41.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:41.703 21:22:04 -- app/version.sh@17 -- # get_header_version major 00:06:41.703 21:22:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:41.703 21:22:04 -- app/version.sh@14 -- # cut -f2 00:06:41.703 21:22:04 -- app/version.sh@14 -- # tr -d '"' 00:06:41.703 21:22:04 -- app/version.sh@17 -- # major=24 00:06:41.703 21:22:04 -- app/version.sh@18 -- # get_header_version minor 00:06:41.703 21:22:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:41.703 21:22:04 -- app/version.sh@14 -- # cut -f2 00:06:41.703 21:22:04 -- app/version.sh@14 -- # tr -d '"' 00:06:41.703 21:22:04 -- app/version.sh@18 -- # minor=5 00:06:41.703 21:22:04 -- app/version.sh@19 -- # get_header_version patch 00:06:41.703 21:22:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:41.703 21:22:04 -- app/version.sh@14 -- # cut -f2 00:06:41.703 21:22:04 -- app/version.sh@14 -- # tr -d '"' 00:06:41.962 21:22:04 -- app/version.sh@19 -- # patch=0 00:06:41.962 21:22:04 -- app/version.sh@20 -- # get_header_version suffix 00:06:41.962 21:22:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:41.962 21:22:04 -- app/version.sh@14 -- # cut -f2 00:06:41.962 21:22:04 -- app/version.sh@14 -- # tr -d '"' 00:06:41.962 21:22:04 -- app/version.sh@20 -- # suffix=-pre 00:06:41.962 21:22:04 -- app/version.sh@22 -- # version=24.5 00:06:41.962 21:22:04 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:41.962 21:22:04 -- app/version.sh@28 -- # version=24.5rc0 00:06:41.962 21:22:04 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:41.962 21:22:04 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:41.962 21:22:04 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:41.962 21:22:04 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:41.962 00:06:41.962 real 0m0.194s 00:06:41.962 user 0m0.099s 00:06:41.962 sys 0m0.144s 00:06:41.962 21:22:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:41.962 21:22:04 -- common/autotest_common.sh@10 -- # set +x 00:06:41.962 ************************************ 00:06:41.962 END TEST version 00:06:41.962 ************************************ 00:06:41.962 21:22:04 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:41.962 21:22:04 -- spdk/autotest.sh@194 -- # uname -s 00:06:41.962 21:22:04 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:41.962 21:22:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:41.962 21:22:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:41.962 21:22:04 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:41.962 21:22:04 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:41.962 21:22:04 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:41.962 21:22:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:41.962 21:22:04 -- common/autotest_common.sh@10 -- # set +x 00:06:41.962 21:22:04 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:41.962 21:22:04 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:41.962 21:22:04 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:41.962 21:22:04 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:41.962 21:22:04 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:41.962 21:22:04 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:41.962 21:22:04 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:41.962 21:22:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:41.962 21:22:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.962 21:22:04 -- common/autotest_common.sh@10 -- # set +x 00:06:42.222 ************************************ 00:06:42.222 START TEST nvmf_tcp 00:06:42.222 ************************************ 00:06:42.222 21:22:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:42.222 * Looking for test storage... 00:06:42.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:42.222 21:22:05 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:42.222 21:22:05 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:42.222 21:22:05 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.222 21:22:05 -- nvmf/common.sh@7 -- # uname -s 00:06:42.222 21:22:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.222 21:22:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.222 21:22:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.222 21:22:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.222 21:22:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.222 21:22:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.222 21:22:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.222 21:22:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.222 21:22:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.222 21:22:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.222 21:22:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:42.222 21:22:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:42.222 21:22:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.222 21:22:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.222 21:22:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.222 21:22:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.222 21:22:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.222 21:22:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.222 21:22:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.222 21:22:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.222 21:22:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.222 21:22:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.222 21:22:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.222 21:22:05 -- paths/export.sh@5 -- # export PATH 00:06:42.222 21:22:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.222 21:22:05 -- nvmf/common.sh@47 -- # : 0 00:06:42.222 21:22:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.222 21:22:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.222 21:22:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.222 21:22:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.222 21:22:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.222 21:22:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.222 21:22:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.222 21:22:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.222 21:22:05 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:42.222 21:22:05 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:42.222 21:22:05 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:42.222 21:22:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:42.222 21:22:05 -- common/autotest_common.sh@10 -- # set +x 00:06:42.222 21:22:05 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:42.222 21:22:05 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:42.222 21:22:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:42.222 21:22:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.222 21:22:05 -- common/autotest_common.sh@10 -- # set +x 00:06:42.482 ************************************ 00:06:42.482 START TEST nvmf_example 00:06:42.482 ************************************ 00:06:42.482 21:22:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:42.482 * Looking for test storage... 
00:06:42.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.483 21:22:05 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.483 21:22:05 -- nvmf/common.sh@7 -- # uname -s 00:06:42.483 21:22:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.483 21:22:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.483 21:22:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.483 21:22:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.483 21:22:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.483 21:22:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.483 21:22:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.483 21:22:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.483 21:22:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.483 21:22:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.483 21:22:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:42.483 21:22:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:42.483 21:22:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.483 21:22:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.483 21:22:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.483 21:22:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.483 21:22:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.483 21:22:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.483 21:22:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.483 21:22:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.483 21:22:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.483 21:22:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.483 21:22:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.483 21:22:05 -- paths/export.sh@5 -- # export PATH 00:06:42.483 21:22:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.483 21:22:05 -- nvmf/common.sh@47 -- # : 0 00:06:42.483 21:22:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.483 21:22:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.483 21:22:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.483 21:22:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.483 21:22:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.483 21:22:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.483 21:22:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.483 21:22:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.483 21:22:05 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:42.483 21:22:05 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:42.483 21:22:05 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:42.483 21:22:05 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:42.483 21:22:05 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:42.483 21:22:05 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:42.483 21:22:05 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:42.483 21:22:05 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:42.483 21:22:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:42.483 21:22:05 -- common/autotest_common.sh@10 -- # set +x 00:06:42.483 21:22:05 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:42.483 21:22:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:42.483 21:22:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.483 21:22:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:42.483 21:22:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:42.483 21:22:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:42.483 21:22:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.483 21:22:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.483 21:22:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.483 21:22:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:42.483 21:22:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:42.483 21:22:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:42.483 21:22:05 -- 
common/autotest_common.sh@10 -- # set +x 00:06:49.085 21:22:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:49.085 21:22:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:49.085 21:22:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:49.085 21:22:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:49.085 21:22:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:49.085 21:22:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:49.085 21:22:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:49.085 21:22:11 -- nvmf/common.sh@295 -- # net_devs=() 00:06:49.085 21:22:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:49.085 21:22:11 -- nvmf/common.sh@296 -- # e810=() 00:06:49.085 21:22:11 -- nvmf/common.sh@296 -- # local -ga e810 00:06:49.085 21:22:11 -- nvmf/common.sh@297 -- # x722=() 00:06:49.085 21:22:11 -- nvmf/common.sh@297 -- # local -ga x722 00:06:49.085 21:22:11 -- nvmf/common.sh@298 -- # mlx=() 00:06:49.085 21:22:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:49.085 21:22:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:49.085 21:22:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:49.085 21:22:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:49.085 21:22:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:49.085 21:22:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:49.085 21:22:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:49.085 21:22:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:49.085 21:22:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:49.085 21:22:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:49.085 21:22:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:49.085 21:22:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:49.085 21:22:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:49.085 21:22:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:49.085 21:22:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:49.085 21:22:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.085 21:22:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:49.085 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:49.085 21:22:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.085 21:22:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:49.085 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:49.085 21:22:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
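The detection pass above matches the e810 device IDs (0x1592/0x159b) against the workspace's prebuilt PCI cache; just below, each matched function is resolved to its bound netdev through a sysfs glob. A rough standalone equivalent is sketched here — it substitutes lspci for the script's internal cache, so treat it as an illustration of the mechanism, not the script itself:

#!/usr/bin/env bash
# List e810 (ice) ports and the net devices bound to them via sysfs,
# mirroring the pci_net_devs glob traced below.
for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    # every netdev registered for this PCI function appears under sysfs
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue   # skip ports with no netdev bound
    echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
done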
00:06:49.085 21:22:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:49.085 21:22:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.085 21:22:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.085 21:22:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:49.085 21:22:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.085 21:22:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:49.085 Found net devices under 0000:af:00.0: cvl_0_0 00:06:49.085 21:22:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.085 21:22:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.085 21:22:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.085 21:22:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:49.085 21:22:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.085 21:22:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:49.085 Found net devices under 0000:af:00.1: cvl_0_1 00:06:49.085 21:22:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.085 21:22:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:49.085 21:22:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:49.085 21:22:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:49.085 21:22:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:49.085 21:22:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.085 21:22:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.085 21:22:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:49.085 21:22:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:49.085 21:22:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:49.085 21:22:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:49.086 21:22:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:49.086 21:22:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:49.086 21:22:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.086 21:22:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:49.086 21:22:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:49.086 21:22:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:49.345 21:22:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:49.345 21:22:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:49.345 21:22:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:49.345 21:22:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:49.345 21:22:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:49.604 21:22:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:49.604 21:22:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:49.604 21:22:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:49.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:49.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:06:49.604 00:06:49.604 --- 10.0.0.2 ping statistics --- 00:06:49.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.604 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:06:49.604 21:22:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:49.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:49.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:06:49.604 00:06:49.604 --- 10.0.0.1 ping statistics --- 00:06:49.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.604 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:06:49.604 21:22:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.604 21:22:12 -- nvmf/common.sh@411 -- # return 0 00:06:49.604 21:22:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:49.604 21:22:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.604 21:22:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:49.604 21:22:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:49.604 21:22:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.604 21:22:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:49.604 21:22:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:49.604 21:22:12 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:49.604 21:22:12 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:49.604 21:22:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:49.604 21:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:49.604 21:22:12 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:49.605 21:22:12 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:49.605 21:22:12 -- target/nvmf_example.sh@34 -- # nvmfpid=2701264 00:06:49.605 21:22:12 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:49.605 21:22:12 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:49.605 21:22:12 -- target/nvmf_example.sh@36 -- # waitforlisten 2701264 00:06:49.605 21:22:12 -- common/autotest_common.sh@817 -- # '[' -z 2701264 ']' 00:06:49.605 21:22:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.605 21:22:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:49.605 21:22:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
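With the namespaces wired up, the harness launches the example nvmf target inside cvl_0_0_ns_spdk and, once its RPC socket is up, shapes it over JSON-RPC. Stripped of the rpc_cmd/waitforlisten wrappers, the sequence traced below condenses to the following sketch (paths per this workspace; the backgrounding and sleep are simplifications of waitforlisten):

#!/usr/bin/env bash
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"
# Example target pinned to 4 cores (-m 0xF) in the test namespace, as traced.
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
sleep 2   # stand-in for waitforlisten on /var/tmp/spdk.sock
$RPC nvmf_create_transport -t tcp -o -u 8192   # TCP transport, options exactly as traced
$RPC bdev_malloc_create 64 512                 # 64 MB RAM bdev, 512 B blocks -> "Malloc0"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf then connects from the initiator side with -q 64 -o 4096 -w randrw -M 30 -t 10 against that listener, which is what produces the latency table further down.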
00:06:49.605 21:22:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:49.605 21:22:12 -- common/autotest_common.sh@10 -- # set +x 00:06:49.605 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.541 21:22:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:50.541 21:22:13 -- common/autotest_common.sh@850 -- # return 0 00:06:50.541 21:22:13 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:50.541 21:22:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:50.541 21:22:13 -- common/autotest_common.sh@10 -- # set +x 00:06:50.541 21:22:13 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:50.541 21:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:50.541 21:22:13 -- common/autotest_common.sh@10 -- # set +x 00:06:50.541 21:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:50.541 21:22:13 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:50.541 21:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:50.541 21:22:13 -- common/autotest_common.sh@10 -- # set +x 00:06:50.541 21:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:50.541 21:22:13 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:50.541 21:22:13 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:50.541 21:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:50.541 21:22:13 -- common/autotest_common.sh@10 -- # set +x 00:06:50.541 21:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:50.541 21:22:13 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:50.542 21:22:13 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:50.542 21:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:50.542 21:22:13 -- common/autotest_common.sh@10 -- # set +x 00:06:50.542 21:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:50.542 21:22:13 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:50.542 21:22:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:50.542 21:22:13 -- common/autotest_common.sh@10 -- # set +x 00:06:50.542 21:22:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:50.542 21:22:13 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:50.542 21:22:13 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:50.542 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.755 Initializing NVMe Controllers 00:07:02.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:02.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:02.755 Initialization complete. Launching workers. 
00:07:02.755 ======================================================== 00:07:02.755 Latency(us) 00:07:02.755 Device Information : IOPS MiB/s Average min max 00:07:02.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14179.20 55.39 4513.92 674.19 16200.52 00:07:02.755 ======================================================== 00:07:02.755 Total : 14179.20 55.39 4513.92 674.19 16200.52 00:07:02.755 00:07:02.755 21:22:23 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:02.755 21:22:23 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:02.755 21:22:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:02.755 21:22:23 -- nvmf/common.sh@117 -- # sync 00:07:02.755 21:22:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:02.755 21:22:23 -- nvmf/common.sh@120 -- # set +e 00:07:02.755 21:22:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:02.755 21:22:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:02.755 rmmod nvme_tcp 00:07:02.755 rmmod nvme_fabrics 00:07:02.755 rmmod nvme_keyring 00:07:02.755 21:22:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:02.755 21:22:23 -- nvmf/common.sh@124 -- # set -e 00:07:02.755 21:22:23 -- nvmf/common.sh@125 -- # return 0 00:07:02.755 21:22:23 -- nvmf/common.sh@478 -- # '[' -n 2701264 ']' 00:07:02.755 21:22:23 -- nvmf/common.sh@479 -- # killprocess 2701264 00:07:02.755 21:22:23 -- common/autotest_common.sh@936 -- # '[' -z 2701264 ']' 00:07:02.755 21:22:23 -- common/autotest_common.sh@940 -- # kill -0 2701264 00:07:02.755 21:22:23 -- common/autotest_common.sh@941 -- # uname 00:07:02.755 21:22:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.755 21:22:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2701264 00:07:02.755 21:22:23 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:02.755 21:22:23 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:02.755 21:22:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2701264' 00:07:02.755 killing process with pid 2701264 00:07:02.755 21:22:23 -- common/autotest_common.sh@955 -- # kill 2701264 00:07:02.755 21:22:23 -- common/autotest_common.sh@960 -- # wait 2701264 00:07:02.755 nvmf threads initialize successfully 00:07:02.755 bdev subsystem init successfully 00:07:02.755 created a nvmf target service 00:07:02.755 create targets's poll groups done 00:07:02.755 all subsystems of target started 00:07:02.755 nvmf target is running 00:07:02.755 all subsystems of target stopped 00:07:02.755 destroy targets's poll groups done 00:07:02.755 destroyed the nvmf target service 00:07:02.755 bdev subsystem finish successfully 00:07:02.755 nvmf threads destroy successfully 00:07:02.755 21:22:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:02.755 21:22:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:02.755 21:22:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:02.755 21:22:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:02.755 21:22:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:02.755 21:22:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.755 21:22:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.755 21:22:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.325 21:22:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:03.325 21:22:25 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:03.325 21:22:25 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:07:03.325 21:22:25 -- common/autotest_common.sh@10 -- # set +x 00:07:03.325 00:07:03.325 real 0m20.781s 00:07:03.325 user 0m45.513s 00:07:03.325 sys 0m7.479s 00:07:03.325 21:22:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:03.325 21:22:25 -- common/autotest_common.sh@10 -- # set +x 00:07:03.325 ************************************ 00:07:03.325 END TEST nvmf_example 00:07:03.325 ************************************ 00:07:03.325 21:22:26 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:03.325 21:22:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:03.325 21:22:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.325 21:22:26 -- common/autotest_common.sh@10 -- # set +x 00:07:03.325 ************************************ 00:07:03.325 START TEST nvmf_filesystem 00:07:03.325 ************************************ 00:07:03.325 21:22:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:03.587 * Looking for test storage... 00:07:03.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.587 21:22:26 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:03.587 21:22:26 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:03.587 21:22:26 -- common/autotest_common.sh@34 -- # set -e 00:07:03.587 21:22:26 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:03.587 21:22:26 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:03.587 21:22:26 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:03.587 21:22:26 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:03.587 21:22:26 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:03.587 21:22:26 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:03.587 21:22:26 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:03.587 21:22:26 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:03.587 21:22:26 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:03.587 21:22:26 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:03.587 21:22:26 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:03.587 21:22:26 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:03.587 21:22:26 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:03.587 21:22:26 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:03.587 21:22:26 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:03.587 21:22:26 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:03.587 21:22:26 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:03.587 21:22:26 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:03.587 21:22:26 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:03.587 21:22:26 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:03.587 21:22:26 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:03.587 21:22:26 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:03.587 21:22:26 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:03.587 21:22:26 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:03.587 21:22:26 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:03.587 21:22:26 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:03.587 21:22:26 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:03.587 21:22:26 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:03.587 21:22:26 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:03.587 21:22:26 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:03.587 21:22:26 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:03.587 21:22:26 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:03.587 21:22:26 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:03.587 21:22:26 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:03.587 21:22:26 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:03.587 21:22:26 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:03.587 21:22:26 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:03.587 21:22:26 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:03.587 21:22:26 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:03.587 21:22:26 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:03.587 21:22:26 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:03.587 21:22:26 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:03.587 21:22:26 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:03.587 21:22:26 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:03.587 21:22:26 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:03.587 21:22:26 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:03.587 21:22:26 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:03.587 21:22:26 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:03.587 21:22:26 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:03.587 21:22:26 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:03.587 21:22:26 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:03.587 21:22:26 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:03.587 21:22:26 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:03.587 21:22:26 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:03.587 21:22:26 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:03.587 21:22:26 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:03.587 21:22:26 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:03.587 21:22:26 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:03.587 21:22:26 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:07:03.587 21:22:26 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:07:03.587 21:22:26 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:03.587 21:22:26 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:03.587 21:22:26 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:03.587 21:22:26 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:07:03.587 21:22:26 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:03.587 21:22:26 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:03.587 21:22:26 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:07:03.587 21:22:26 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:07:03.587 21:22:26 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:03.587 
21:22:26 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:07:03.587 21:22:26 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:03.587 21:22:26 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:03.587 21:22:26 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:03.587 21:22:26 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:03.587 21:22:26 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:03.588 21:22:26 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:03.588 21:22:26 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:03.588 21:22:26 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:03.588 21:22:26 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:03.588 21:22:26 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:03.588 21:22:26 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:03.588 21:22:26 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:03.588 21:22:26 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:03.588 21:22:26 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:03.588 21:22:26 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:03.588 21:22:26 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:03.588 21:22:26 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:07:03.588 21:22:26 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:03.588 21:22:26 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:03.588 21:22:26 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:03.588 21:22:26 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:03.588 21:22:26 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:03.588 21:22:26 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:03.588 21:22:26 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:03.588 21:22:26 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:03.588 21:22:26 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:03.588 21:22:26 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:03.588 21:22:26 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:03.588 21:22:26 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:03.588 21:22:26 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:03.588 21:22:26 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:03.588 21:22:26 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:03.588 21:22:26 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:03.588 #define SPDK_CONFIG_H 00:07:03.588 #define SPDK_CONFIG_APPS 1 00:07:03.588 #define SPDK_CONFIG_ARCH native 00:07:03.588 #undef SPDK_CONFIG_ASAN 00:07:03.588 #undef SPDK_CONFIG_AVAHI 00:07:03.588 #undef SPDK_CONFIG_CET 00:07:03.588 #define SPDK_CONFIG_COVERAGE 1 00:07:03.588 #define SPDK_CONFIG_CROSS_PREFIX 00:07:03.588 #undef SPDK_CONFIG_CRYPTO 00:07:03.588 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:03.588 #undef 
SPDK_CONFIG_CUSTOMOCF 00:07:03.588 #undef SPDK_CONFIG_DAOS 00:07:03.588 #define SPDK_CONFIG_DAOS_DIR 00:07:03.588 #define SPDK_CONFIG_DEBUG 1 00:07:03.588 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:03.588 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:03.588 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:03.588 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:03.588 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:03.588 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:03.588 #define SPDK_CONFIG_EXAMPLES 1 00:07:03.588 #undef SPDK_CONFIG_FC 00:07:03.588 #define SPDK_CONFIG_FC_PATH 00:07:03.588 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:03.588 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:03.588 #undef SPDK_CONFIG_FUSE 00:07:03.588 #undef SPDK_CONFIG_FUZZER 00:07:03.588 #define SPDK_CONFIG_FUZZER_LIB 00:07:03.588 #undef SPDK_CONFIG_GOLANG 00:07:03.588 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:03.588 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:03.588 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:03.588 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:03.588 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:03.588 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:03.588 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:03.588 #define SPDK_CONFIG_IDXD 1 00:07:03.588 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:03.588 #undef SPDK_CONFIG_IPSEC_MB 00:07:03.588 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:03.588 #define SPDK_CONFIG_ISAL 1 00:07:03.588 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:03.588 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:03.588 #define SPDK_CONFIG_LIBDIR 00:07:03.588 #undef SPDK_CONFIG_LTO 00:07:03.588 #define SPDK_CONFIG_MAX_LCORES 00:07:03.588 #define SPDK_CONFIG_NVME_CUSE 1 00:07:03.588 #undef SPDK_CONFIG_OCF 00:07:03.588 #define SPDK_CONFIG_OCF_PATH 00:07:03.588 #define SPDK_CONFIG_OPENSSL_PATH 00:07:03.588 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:03.588 #define SPDK_CONFIG_PGO_DIR 00:07:03.588 #undef SPDK_CONFIG_PGO_USE 00:07:03.588 #define SPDK_CONFIG_PREFIX /usr/local 00:07:03.588 #undef SPDK_CONFIG_RAID5F 00:07:03.588 #undef SPDK_CONFIG_RBD 00:07:03.588 #define SPDK_CONFIG_RDMA 1 00:07:03.588 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:03.588 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:03.588 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:03.588 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:03.588 #define SPDK_CONFIG_SHARED 1 00:07:03.588 #undef SPDK_CONFIG_SMA 00:07:03.588 #define SPDK_CONFIG_TESTS 1 00:07:03.588 #undef SPDK_CONFIG_TSAN 00:07:03.588 #define SPDK_CONFIG_UBLK 1 00:07:03.588 #define SPDK_CONFIG_UBSAN 1 00:07:03.588 #undef SPDK_CONFIG_UNIT_TESTS 00:07:03.588 #undef SPDK_CONFIG_URING 00:07:03.588 #define SPDK_CONFIG_URING_PATH 00:07:03.588 #undef SPDK_CONFIG_URING_ZNS 00:07:03.588 #undef SPDK_CONFIG_USDT 00:07:03.588 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:03.588 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:03.588 #define SPDK_CONFIG_VFIO_USER 1 00:07:03.588 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:03.588 #define SPDK_CONFIG_VHOST 1 00:07:03.588 #define SPDK_CONFIG_VIRTIO 1 00:07:03.588 #undef SPDK_CONFIG_VTUNE 00:07:03.588 #define SPDK_CONFIG_VTUNE_DIR 00:07:03.588 #define SPDK_CONFIG_WERROR 1 00:07:03.588 #define SPDK_CONFIG_WPDK_DIR 00:07:03.588 #undef SPDK_CONFIG_XNVME 00:07:03.588 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:03.588 21:22:26 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:03.588 21:22:26 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.588 21:22:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.588 21:22:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.588 21:22:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.588 21:22:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.588 21:22:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.588 21:22:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.589 21:22:26 -- paths/export.sh@5 -- # export PATH 00:07:03.589 21:22:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.589 21:22:26 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:03.589 21:22:26 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:03.589 21:22:26 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:03.589 21:22:26 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:03.589 21:22:26 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:03.589 21:22:26 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:03.589 21:22:26 -- pm/common@67 -- # TEST_TAG=N/A 00:07:03.589 21:22:26 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:03.589 21:22:26 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:03.589 21:22:26 -- pm/common@71 -- # uname -s 00:07:03.589 21:22:26 -- pm/common@71 -- # PM_OS=Linux 00:07:03.589 21:22:26 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:03.589 21:22:26 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:07:03.589 21:22:26 -- pm/common@76 -- # [[ Linux == Linux ]] 00:07:03.589 21:22:26 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:07:03.589 21:22:26 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:07:03.589 21:22:26 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:03.589 21:22:26 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:03.589 21:22:26 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:07:03.589 21:22:26 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:07:03.589 21:22:26 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:03.589 21:22:26 -- common/autotest_common.sh@57 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:03.589 21:22:26 -- common/autotest_common.sh@61 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:03.589 21:22:26 -- common/autotest_common.sh@63 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:03.589 21:22:26 -- common/autotest_common.sh@65 -- # : 1 00:07:03.589 21:22:26 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:03.589 21:22:26 -- common/autotest_common.sh@67 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:03.589 21:22:26 -- common/autotest_common.sh@69 -- # : 00:07:03.589 21:22:26 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:03.589 21:22:26 -- common/autotest_common.sh@71 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:03.589 21:22:26 -- common/autotest_common.sh@73 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:03.589 21:22:26 -- common/autotest_common.sh@75 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:03.589 21:22:26 -- common/autotest_common.sh@77 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:03.589 21:22:26 -- common/autotest_common.sh@79 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:03.589 21:22:26 -- common/autotest_common.sh@81 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:03.589 21:22:26 -- common/autotest_common.sh@83 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:03.589 21:22:26 -- common/autotest_common.sh@85 -- # : 1 00:07:03.589 21:22:26 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:03.589 21:22:26 -- common/autotest_common.sh@87 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:03.589 21:22:26 -- common/autotest_common.sh@89 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:03.589 21:22:26 -- common/autotest_common.sh@91 -- # : 1 
00:07:03.589 21:22:26 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:03.589 21:22:26 -- common/autotest_common.sh@93 -- # : 1 00:07:03.589 21:22:26 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:03.589 21:22:26 -- common/autotest_common.sh@95 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:03.589 21:22:26 -- common/autotest_common.sh@97 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:03.589 21:22:26 -- common/autotest_common.sh@99 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:03.589 21:22:26 -- common/autotest_common.sh@101 -- # : tcp 00:07:03.589 21:22:26 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:03.589 21:22:26 -- common/autotest_common.sh@103 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:03.589 21:22:26 -- common/autotest_common.sh@105 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:03.589 21:22:26 -- common/autotest_common.sh@107 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:03.589 21:22:26 -- common/autotest_common.sh@109 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:03.589 21:22:26 -- common/autotest_common.sh@111 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:03.589 21:22:26 -- common/autotest_common.sh@113 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:03.589 21:22:26 -- common/autotest_common.sh@115 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:03.589 21:22:26 -- common/autotest_common.sh@117 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:03.589 21:22:26 -- common/autotest_common.sh@119 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:03.589 21:22:26 -- common/autotest_common.sh@121 -- # : 1 00:07:03.589 21:22:26 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:03.589 21:22:26 -- common/autotest_common.sh@123 -- # : 00:07:03.589 21:22:26 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:03.589 21:22:26 -- common/autotest_common.sh@125 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:03.589 21:22:26 -- common/autotest_common.sh@127 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:03.589 21:22:26 -- common/autotest_common.sh@129 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:03.589 21:22:26 -- common/autotest_common.sh@131 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:03.589 21:22:26 -- common/autotest_common.sh@133 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:03.589 21:22:26 -- common/autotest_common.sh@135 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:03.589 21:22:26 -- common/autotest_common.sh@137 -- # : 00:07:03.589 21:22:26 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:03.589 21:22:26 -- 
common/autotest_common.sh@139 -- # : true 00:07:03.589 21:22:26 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:03.589 21:22:26 -- common/autotest_common.sh@141 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:03.589 21:22:26 -- common/autotest_common.sh@143 -- # : 0 00:07:03.589 21:22:26 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:03.589 21:22:26 -- common/autotest_common.sh@145 -- # : 0 00:07:03.590 21:22:26 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:03.590 21:22:26 -- common/autotest_common.sh@147 -- # : 0 00:07:03.590 21:22:26 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:03.590 21:22:26 -- common/autotest_common.sh@149 -- # : 0 00:07:03.590 21:22:26 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:03.590 21:22:26 -- common/autotest_common.sh@151 -- # : 0 00:07:03.590 21:22:26 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:03.590 21:22:26 -- common/autotest_common.sh@153 -- # : e810 00:07:03.590 21:22:26 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:03.590 21:22:26 -- common/autotest_common.sh@155 -- # : 0 00:07:03.590 21:22:26 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:03.590 21:22:26 -- common/autotest_common.sh@157 -- # : 0 00:07:03.590 21:22:26 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:03.590 21:22:26 -- common/autotest_common.sh@159 -- # : 0 00:07:03.590 21:22:26 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:03.590 21:22:26 -- common/autotest_common.sh@161 -- # : 0 00:07:03.590 21:22:26 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:03.590 21:22:26 -- common/autotest_common.sh@163 -- # : 0 00:07:03.590 21:22:26 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:03.590 21:22:26 -- common/autotest_common.sh@166 -- # : 00:07:03.590 21:22:26 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:03.590 21:22:26 -- common/autotest_common.sh@168 -- # : 0 00:07:03.590 21:22:26 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:03.590 21:22:26 -- common/autotest_common.sh@170 -- # : 0 00:07:03.590 21:22:26 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:03.590 21:22:26 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:03.590 21:22:26 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:03.590 21:22:26 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:03.590 21:22:26 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:03.590 21:22:26 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.590 21:22:26 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.590 21:22:26 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.590 21:22:26 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.590 21:22:26 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:03.590 21:22:26 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:03.590 21:22:26 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:03.590 21:22:26 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:03.590 21:22:26 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:03.590 21:22:26 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:03.590 21:22:26 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:03.590 21:22:26 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:03.590 21:22:26 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:03.590 21:22:26 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:03.590 21:22:26 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:03.590 21:22:26 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:03.590 21:22:26 -- common/autotest_common.sh@199 -- # cat 00:07:03.590 21:22:26 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:07:03.590 21:22:26 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:03.590 21:22:26 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:03.590 21:22:26 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:03.590 21:22:26 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:03.590 21:22:26 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:07:03.590 21:22:26 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:07:03.590 21:22:26 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:03.590 21:22:26 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:03.590 21:22:26 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:03.590 21:22:26 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:03.590 21:22:26 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:03.590 21:22:26 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:03.590 21:22:26 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:03.590 21:22:26 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:03.590 21:22:26 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:03.590 21:22:26 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:03.590 21:22:26 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:03.590 21:22:26 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:03.590 21:22:26 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:07:03.590 21:22:26 -- common/autotest_common.sh@252 -- # export valgrind= 00:07:03.590 21:22:26 -- common/autotest_common.sh@252 -- # valgrind= 00:07:03.590 21:22:26 -- common/autotest_common.sh@258 -- # uname -s 00:07:03.590 21:22:26 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:07:03.590 21:22:26 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:07:03.590 21:22:26 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:07:03.590 21:22:26 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:07:03.590 21:22:26 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:03.590 21:22:26 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:03.590 
21:22:26 -- common/autotest_common.sh@268 -- # MAKE=make 00:07:03.590 21:22:26 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j112 00:07:03.590 21:22:26 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:07:03.590 21:22:26 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:07:03.590 21:22:26 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:07:03.590 21:22:26 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:07:03.590 21:22:26 -- common/autotest_common.sh@289 -- # for i in "$@" 00:07:03.590 21:22:26 -- common/autotest_common.sh@290 -- # case "$i" in 00:07:03.590 21:22:26 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:07:03.591 21:22:26 -- common/autotest_common.sh@307 -- # [[ -z 2703783 ]] 00:07:03.591 21:22:26 -- common/autotest_common.sh@307 -- # kill -0 2703783 00:07:03.591 21:22:26 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:03.591 21:22:26 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:07:03.591 21:22:26 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:07:03.591 21:22:26 -- common/autotest_common.sh@320 -- # local mount target_dir 00:07:03.591 21:22:26 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:07:03.591 21:22:26 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:07:03.591 21:22:26 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:07:03.591 21:22:26 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:07:03.591 21:22:26 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.DYjB99 00:07:03.591 21:22:26 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:03.591 21:22:26 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:07:03.591 21:22:26 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:03.591 21:22:26 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.DYjB99/tests/target /tmp/spdk.DYjB99 00:07:03.591 21:22:26 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:07:03.591 21:22:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:03.591 21:22:26 -- common/autotest_common.sh@316 -- # df -T 00:07:03.591 21:22:26 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:07:03.591 21:22:26 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:07:03.591 21:22:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:07:03.591 21:22:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:07:03.591 21:22:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:07:03.591 21:22:26 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:07:03.591 21:22:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:03.591 21:22:26 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:07:03.591 21:22:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:07:03.591 21:22:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=995438592 00:07:03.591 21:22:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:07:03.591 21:22:26 -- common/autotest_common.sh@352 -- # uses["$mount"]=4288991232 00:07:03.591 21:22:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:03.591 21:22:26 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:07:03.591 21:22:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:07:03.591 21:22:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=52256759808 00:07:03.591 21:22:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61742301184 00:07:03.591 21:22:26 -- common/autotest_common.sh@352 -- # uses["$mount"]=9485541376 00:07:03.591 21:22:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:03.591 21:22:26 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:03.591 21:22:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:03.591 21:22:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=30815514624 00:07:03.591 21:22:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30871150592 00:07:03.591 21:22:26 -- common/autotest_common.sh@352 -- # uses["$mount"]=55635968 00:07:03.591 21:22:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:03.591 21:22:26 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:03.591 21:22:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:03.591 21:22:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=12339077120 00:07:03.591 21:22:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12348461056 00:07:03.591 21:22:26 -- common/autotest_common.sh@352 -- # uses["$mount"]=9383936 00:07:03.591 21:22:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:03.591 21:22:26 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:03.591 21:22:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:03.591 21:22:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=30870315008 00:07:03.591 21:22:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30871150592 00:07:03.591 21:22:26 -- common/autotest_common.sh@352 -- # uses["$mount"]=835584 00:07:03.591 21:22:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:03.591 21:22:26 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:03.591 21:22:26 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:03.591 21:22:26 -- common/autotest_common.sh@351 -- # avails["$mount"]=6174224384 00:07:03.591 21:22:26 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6174228480 00:07:03.591 21:22:26 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:07:03.591 21:22:26 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:03.591 21:22:26 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:07:03.591 * Looking for test storage... 
00:07:03.591 21:22:26 -- common/autotest_common.sh@357 -- # local target_space new_size 00:07:03.591 21:22:26 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:07:03.591 21:22:26 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.591 21:22:26 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:03.591 21:22:26 -- common/autotest_common.sh@361 -- # mount=/ 00:07:03.591 21:22:26 -- common/autotest_common.sh@363 -- # target_space=52256759808 00:07:03.591 21:22:26 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:07:03.591 21:22:26 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:07:03.591 21:22:26 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:07:03.591 21:22:26 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:07:03.591 21:22:26 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:07:03.591 21:22:26 -- common/autotest_common.sh@370 -- # new_size=11700133888 00:07:03.591 21:22:26 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:03.591 21:22:26 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.591 21:22:26 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.591 21:22:26 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.591 21:22:26 -- common/autotest_common.sh@378 -- # return 0 00:07:03.591 21:22:26 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:03.591 21:22:26 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:07:03.591 21:22:26 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:03.591 21:22:26 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:03.591 21:22:26 -- common/autotest_common.sh@1673 -- # true 00:07:03.591 21:22:26 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:03.591 21:22:26 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:03.591 21:22:26 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:03.591 21:22:26 -- common/autotest_common.sh@27 -- # exec 00:07:03.591 21:22:26 -- common/autotest_common.sh@29 -- # exec 00:07:03.591 21:22:26 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:03.591 21:22:26 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:03.591 21:22:26 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:03.591 21:22:26 -- common/autotest_common.sh@18 -- # set -x 00:07:03.591 21:22:26 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.591 21:22:26 -- nvmf/common.sh@7 -- # uname -s 00:07:03.591 21:22:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.591 21:22:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.591 21:22:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.591 21:22:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.591 21:22:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.591 21:22:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.591 21:22:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.591 21:22:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.591 21:22:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.591 21:22:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.591 21:22:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:03.591 21:22:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:03.591 21:22:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.591 21:22:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.592 21:22:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.592 21:22:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.592 21:22:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.592 21:22:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.592 21:22:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.592 21:22:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.592 21:22:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.592 21:22:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.592 21:22:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.592 21:22:26 -- paths/export.sh@5 -- # export PATH 00:07:03.592 21:22:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.592 21:22:26 -- nvmf/common.sh@47 -- # : 0 00:07:03.592 21:22:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:03.592 21:22:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:03.592 21:22:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.592 21:22:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.592 21:22:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.592 21:22:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:03.592 21:22:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:03.592 21:22:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:03.592 21:22:26 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:03.592 21:22:26 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:03.592 21:22:26 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:03.592 21:22:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:03.592 21:22:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.592 21:22:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:03.592 21:22:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:03.592 21:22:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:03.592 21:22:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.592 21:22:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.592 21:22:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.592 21:22:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:03.592 21:22:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:03.592 21:22:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:03.592 21:22:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.166 21:22:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:10.166 21:22:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:10.166 21:22:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:10.166 21:22:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:10.166 21:22:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:10.166 21:22:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:10.166 21:22:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:10.166 21:22:32 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:10.166 21:22:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:10.166 21:22:32 -- nvmf/common.sh@296 -- # e810=() 00:07:10.166 21:22:32 -- nvmf/common.sh@296 -- # local -ga e810 00:07:10.166 21:22:32 -- nvmf/common.sh@297 -- # x722=() 00:07:10.166 21:22:32 -- nvmf/common.sh@297 -- # local -ga x722 00:07:10.166 21:22:32 -- nvmf/common.sh@298 -- # mlx=() 00:07:10.166 21:22:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:10.166 21:22:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.166 21:22:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.166 21:22:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.166 21:22:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.166 21:22:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.166 21:22:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.166 21:22:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.166 21:22:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.166 21:22:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.166 21:22:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.166 21:22:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.166 21:22:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:10.166 21:22:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:10.166 21:22:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:10.166 21:22:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.166 21:22:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:10.166 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:10.166 21:22:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.166 21:22:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:10.166 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:10.166 21:22:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:10.166 21:22:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.166 21:22:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.166 21:22:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:10.166 21:22:32 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.166 21:22:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:10.166 Found net devices under 0000:af:00.0: cvl_0_0 00:07:10.166 21:22:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.166 21:22:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.166 21:22:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.166 21:22:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:10.166 21:22:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.166 21:22:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:10.166 Found net devices under 0000:af:00.1: cvl_0_1 00:07:10.166 21:22:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.166 21:22:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:10.166 21:22:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:10.166 21:22:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:10.166 21:22:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:10.166 21:22:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.166 21:22:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.166 21:22:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.166 21:22:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:10.166 21:22:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.166 21:22:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.166 21:22:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:10.166 21:22:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.166 21:22:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.166 21:22:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:10.166 21:22:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:10.166 21:22:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.166 21:22:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.428 21:22:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.428 21:22:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.428 21:22:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:10.428 21:22:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.428 21:22:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.428 21:22:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.428 21:22:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:10.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:07:10.428 00:07:10.428 --- 10.0.0.2 ping statistics --- 00:07:10.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.428 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:07:10.428 21:22:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:10.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:07:10.428 00:07:10.428 --- 10.0.0.1 ping statistics --- 00:07:10.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.428 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:07:10.428 21:22:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.428 21:22:33 -- nvmf/common.sh@411 -- # return 0 00:07:10.428 21:22:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:10.428 21:22:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.428 21:22:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:10.428 21:22:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:10.428 21:22:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.428 21:22:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:10.428 21:22:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:10.428 21:22:33 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:10.428 21:22:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:10.428 21:22:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.428 21:22:33 -- common/autotest_common.sh@10 -- # set +x 00:07:10.687 ************************************ 00:07:10.687 START TEST nvmf_filesystem_no_in_capsule 00:07:10.687 ************************************ 00:07:10.687 21:22:33 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:07:10.687 21:22:33 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:10.687 21:22:33 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:10.687 21:22:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:10.687 21:22:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:10.687 21:22:33 -- common/autotest_common.sh@10 -- # set +x 00:07:10.687 21:22:33 -- nvmf/common.sh@470 -- # nvmfpid=2707177 00:07:10.687 21:22:33 -- nvmf/common.sh@471 -- # waitforlisten 2707177 00:07:10.687 21:22:33 -- common/autotest_common.sh@817 -- # '[' -z 2707177 ']' 00:07:10.687 21:22:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.687 21:22:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:10.687 21:22:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.687 21:22:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:10.687 21:22:33 -- common/autotest_common.sh@10 -- # set +x 00:07:10.687 21:22:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:10.687 [2024-04-24 21:22:33.455290] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:07:10.687 [2024-04-24 21:22:33.455333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.687 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.687 [2024-04-24 21:22:33.530074] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.946 [2024-04-24 21:22:33.604824] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
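The network commands traced above build the physical-loopback topology every test in this run relies on: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace and addressed as the target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator, so NVMe/TCP traffic traverses the link between the two ports rather than kernel loopback. A minimal standalone sketch of the same setup, assuming stock iproute2 and the interface names from this run:

    # target side: isolate cvl_0_0 in its own namespace with the target address
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # initiator side: keep cvl_0_1 in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # verify both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1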
00:07:10.946 [2024-04-24 21:22:33.604860] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.946 [2024-04-24 21:22:33.604869] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.946 [2024-04-24 21:22:33.604878] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.946 [2024-04-24 21:22:33.604884] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:10.946 [2024-04-24 21:22:33.604933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.946 [2024-04-24 21:22:33.605028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.946 [2024-04-24 21:22:33.605112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.946 [2024-04-24 21:22:33.605113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.515 21:22:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.515 21:22:34 -- common/autotest_common.sh@850 -- # return 0 00:07:11.515 21:22:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:11.515 21:22:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:11.515 21:22:34 -- common/autotest_common.sh@10 -- # set +x 00:07:11.516 21:22:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.516 21:22:34 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:11.516 21:22:34 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:11.516 21:22:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.516 21:22:34 -- common/autotest_common.sh@10 -- # set +x 00:07:11.516 [2024-04-24 21:22:34.311338] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.516 21:22:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.516 21:22:34 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:11.516 21:22:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.516 21:22:34 -- common/autotest_common.sh@10 -- # set +x 00:07:11.776 Malloc1 00:07:11.776 21:22:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.776 21:22:34 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:11.776 21:22:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.776 21:22:34 -- common/autotest_common.sh@10 -- # set +x 00:07:11.776 21:22:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.776 21:22:34 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:11.776 21:22:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.776 21:22:34 -- common/autotest_common.sh@10 -- # set +x 00:07:11.776 21:22:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.776 21:22:34 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.776 21:22:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.776 21:22:34 -- common/autotest_common.sh@10 -- # set +x 00:07:11.776 [2024-04-24 21:22:34.463553] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.776 21:22:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.776 21:22:34 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:07:11.776 21:22:34 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:11.776 21:22:34 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:11.776 21:22:34 -- common/autotest_common.sh@1366 -- # local bs 00:07:11.776 21:22:34 -- common/autotest_common.sh@1367 -- # local nb 00:07:11.776 21:22:34 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:11.776 21:22:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.776 21:22:34 -- common/autotest_common.sh@10 -- # set +x 00:07:11.776 21:22:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.776 21:22:34 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:11.776 { 00:07:11.776 "name": "Malloc1", 00:07:11.776 "aliases": [ 00:07:11.776 "6adbc3f3-bbca-4b1f-a9d5-45c69f1b1200" 00:07:11.776 ], 00:07:11.776 "product_name": "Malloc disk", 00:07:11.776 "block_size": 512, 00:07:11.776 "num_blocks": 1048576, 00:07:11.776 "uuid": "6adbc3f3-bbca-4b1f-a9d5-45c69f1b1200", 00:07:11.776 "assigned_rate_limits": { 00:07:11.776 "rw_ios_per_sec": 0, 00:07:11.776 "rw_mbytes_per_sec": 0, 00:07:11.776 "r_mbytes_per_sec": 0, 00:07:11.776 "w_mbytes_per_sec": 0 00:07:11.776 }, 00:07:11.776 "claimed": true, 00:07:11.776 "claim_type": "exclusive_write", 00:07:11.776 "zoned": false, 00:07:11.776 "supported_io_types": { 00:07:11.776 "read": true, 00:07:11.776 "write": true, 00:07:11.776 "unmap": true, 00:07:11.776 "write_zeroes": true, 00:07:11.776 "flush": true, 00:07:11.776 "reset": true, 00:07:11.776 "compare": false, 00:07:11.776 "compare_and_write": false, 00:07:11.776 "abort": true, 00:07:11.776 "nvme_admin": false, 00:07:11.776 "nvme_io": false 00:07:11.776 }, 00:07:11.776 "memory_domains": [ 00:07:11.776 { 00:07:11.776 "dma_device_id": "system", 00:07:11.776 "dma_device_type": 1 00:07:11.776 }, 00:07:11.776 { 00:07:11.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.776 "dma_device_type": 2 00:07:11.776 } 00:07:11.776 ], 00:07:11.776 "driver_specific": {} 00:07:11.776 } 00:07:11.776 ]' 00:07:11.776 21:22:34 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:11.776 21:22:34 -- common/autotest_common.sh@1369 -- # bs=512 00:07:11.776 21:22:34 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:11.776 21:22:34 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:11.776 21:22:34 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:11.776 21:22:34 -- common/autotest_common.sh@1374 -- # echo 512 00:07:11.776 21:22:34 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:11.776 21:22:34 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:13.154 21:22:35 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:13.154 21:22:35 -- common/autotest_common.sh@1184 -- # local i=0 00:07:13.154 21:22:35 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:13.154 21:22:35 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:13.154 21:22:35 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:15.693 21:22:37 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:15.693 21:22:37 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:15.693 21:22:37 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:15.693 21:22:37 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
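At this point the target is fully provisioned and the initiator is attached (nvme_devices=1). The whole sequence condenses to a handful of commands; a sketch assuming SPDK's rpc.py talking to the default /var/tmp/spdk.sock socket (the ./build and ./scripts paths are abbreviations of the workspace paths in the log; the harness drives the same RPCs through its rpc_cmd wrapper):

    # start the target inside the target namespace, flags as in the trace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # provision it over JSON-RPC once the socket answers
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1     # 512 MiB ramdisk, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # size the bdev the same way get_bdev_size does: block_size * num_blocks
    bs=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')
    nb=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')
    echo $((bs * nb / 1024 / 1024))                            # prints 512 (MiB), matching the malloc size

    # attach from the initiator in the root namespace
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420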
00:07:15.693 21:22:37 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:15.693 21:22:37 -- common/autotest_common.sh@1194 -- # return 0 00:07:15.693 21:22:37 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:15.693 21:22:37 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:15.693 21:22:38 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:15.693 21:22:38 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:15.693 21:22:38 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:15.693 21:22:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:15.693 21:22:38 -- setup/common.sh@80 -- # echo 536870912 00:07:15.693 21:22:38 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:15.693 21:22:38 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:15.693 21:22:38 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:15.693 21:22:38 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:15.693 21:22:38 -- target/filesystem.sh@69 -- # partprobe 00:07:15.952 21:22:38 -- target/filesystem.sh@70 -- # sleep 1 00:07:16.887 21:22:39 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:16.887 21:22:39 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:16.887 21:22:39 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:16.887 21:22:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.887 21:22:39 -- common/autotest_common.sh@10 -- # set +x 00:07:17.145 ************************************ 00:07:17.145 START TEST filesystem_ext4 00:07:17.145 ************************************ 00:07:17.145 21:22:39 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:17.145 21:22:39 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:17.145 21:22:39 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:17.145 21:22:39 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:17.145 21:22:39 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:17.145 21:22:39 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:17.145 21:22:39 -- common/autotest_common.sh@914 -- # local i=0 00:07:17.145 21:22:39 -- common/autotest_common.sh@915 -- # local force 00:07:17.145 21:22:39 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:17.145 21:22:39 -- common/autotest_common.sh@918 -- # force=-F 00:07:17.145 21:22:39 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:17.145 mke2fs 1.46.5 (30-Dec-2021) 00:07:17.145 Discarding device blocks: 0/522240 done 00:07:17.403 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:17.403 Filesystem UUID: 8be7e7b5-213f-4c1e-b888-63f57a83bdaf 00:07:17.403 Superblock backups stored on blocks: 00:07:17.403 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:17.403 00:07:17.403 Allocating group tables: 0/64 done 00:07:17.403 Writing inode tables: 0/64 done 00:07:19.305 Creating journal (8192 blocks): done 00:07:20.500 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:07:20.500 00:07:20.500 21:22:43 -- common/autotest_common.sh@931 -- # return 0 00:07:20.500 21:22:43 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:20.500 21:22:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:20.500 21:22:43 -- target/filesystem.sh@25 -- # sync 00:07:20.500 21:22:43 -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:07:20.500 21:22:43 -- target/filesystem.sh@27 -- # sync 00:07:20.500 21:22:43 -- target/filesystem.sh@29 -- # i=0 00:07:20.500 21:22:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:20.500 21:22:43 -- target/filesystem.sh@37 -- # kill -0 2707177 00:07:20.500 21:22:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:20.500 21:22:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:20.500 21:22:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:20.500 21:22:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:20.500 00:07:20.500 real 0m3.464s 00:07:20.500 user 0m0.024s 00:07:20.500 sys 0m0.087s 00:07:20.500 21:22:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:20.500 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:07:20.500 ************************************ 00:07:20.500 END TEST filesystem_ext4 00:07:20.500 ************************************ 00:07:20.760 21:22:43 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:20.760 21:22:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:20.760 21:22:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.760 21:22:43 -- common/autotest_common.sh@10 -- # set +x 00:07:20.760 ************************************ 00:07:20.760 START TEST filesystem_btrfs 00:07:20.760 ************************************ 00:07:20.760 21:22:43 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:20.760 21:22:43 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:20.760 21:22:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.760 21:22:43 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:20.760 21:22:43 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:20.760 21:22:43 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:20.760 21:22:43 -- common/autotest_common.sh@914 -- # local i=0 00:07:20.760 21:22:43 -- common/autotest_common.sh@915 -- # local force 00:07:20.760 21:22:43 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:20.760 21:22:43 -- common/autotest_common.sh@920 -- # force=-f 00:07:20.760 21:22:43 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:21.327 btrfs-progs v6.6.2 00:07:21.327 See https://btrfs.readthedocs.io for more information. 00:07:21.327 00:07:21.327 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
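The ext4 pass that just completed shows the exercise every filesystem in the matrix receives: make a filesystem on the GPT partition, mount it, create and remove a file with syncs in between, unmount, and confirm the target survived the I/O. As a sketch, using the device name and PID variable from this run (btrfs and xfs substitute their own mkfs):

    mkfs.ext4 -F /dev/nvme0n1p1               # ext4 takes -F; btrfs and xfs take -f
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # fails if the target died during I/O
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible afterwards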
00:07:21.327 NOTE: several default settings have changed in version 5.15, please make sure 00:07:21.327 this does not affect your deployments: 00:07:21.327 - DUP for metadata (-m dup) 00:07:21.327 - enabled no-holes (-O no-holes) 00:07:21.327 - enabled free-space-tree (-R free-space-tree) 00:07:21.327 00:07:21.327 Label: (null) 00:07:21.327 UUID: 694ec578-5115-4b2e-9bc6-39e6ea03c725 00:07:21.327 Node size: 16384 00:07:21.327 Sector size: 4096 00:07:21.327 Filesystem size: 510.00MiB 00:07:21.327 Block group profiles: 00:07:21.327 Data: single 8.00MiB 00:07:21.328 Metadata: DUP 32.00MiB 00:07:21.328 System: DUP 8.00MiB 00:07:21.328 SSD detected: yes 00:07:21.328 Zoned device: no 00:07:21.328 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:21.328 Runtime features: free-space-tree 00:07:21.328 Checksum: crc32c 00:07:21.328 Number of devices: 1 00:07:21.328 Devices: 00:07:21.328 ID SIZE PATH 00:07:21.328 1 510.00MiB /dev/nvme0n1p1 00:07:21.328 00:07:21.328 21:22:44 -- common/autotest_common.sh@931 -- # return 0 00:07:21.328 21:22:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:22.264 21:22:45 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:22.264 21:22:45 -- target/filesystem.sh@25 -- # sync 00:07:22.264 21:22:45 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:22.264 21:22:45 -- target/filesystem.sh@27 -- # sync 00:07:22.264 21:22:45 -- target/filesystem.sh@29 -- # i=0 00:07:22.264 21:22:45 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:22.264 21:22:45 -- target/filesystem.sh@37 -- # kill -0 2707177 00:07:22.264 21:22:45 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:22.264 21:22:45 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:22.264 21:22:45 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:22.264 21:22:45 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:22.264 00:07:22.264 real 0m1.547s 00:07:22.264 user 0m0.026s 00:07:22.264 sys 0m0.148s 00:07:22.264 21:22:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:22.264 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:07:22.264 ************************************ 00:07:22.264 END TEST filesystem_btrfs 00:07:22.264 ************************************ 00:07:22.523 21:22:45 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:22.523 21:22:45 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:22.523 21:22:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.523 21:22:45 -- common/autotest_common.sh@10 -- # set +x 00:07:22.523 ************************************ 00:07:22.523 START TEST filesystem_xfs 00:07:22.523 ************************************ 00:07:22.523 21:22:45 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:22.523 21:22:45 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:22.523 21:22:45 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:22.523 21:22:45 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:22.523 21:22:45 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:22.523 21:22:45 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:22.523 21:22:45 -- common/autotest_common.sh@914 -- # local i=0 00:07:22.523 21:22:45 -- common/autotest_common.sh@915 -- # local force 00:07:22.523 21:22:45 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:22.523 21:22:45 -- common/autotest_common.sh@920 -- # force=-f 00:07:22.523 21:22:45 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:22.782 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:22.782 = sectsz=512 attr=2, projid32bit=1 00:07:22.782 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:22.782 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:22.782 data = bsize=4096 blocks=130560, imaxpct=25 00:07:22.782 = sunit=0 swidth=0 blks 00:07:22.782 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:22.782 log =internal log bsize=4096 blocks=16384, version=2 00:07:22.782 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:22.782 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:23.733 Discarding blocks...Done. 00:07:23.733 21:22:46 -- common/autotest_common.sh@931 -- # return 0 00:07:23.733 21:22:46 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:26.265 21:22:49 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:26.265 21:22:49 -- target/filesystem.sh@25 -- # sync 00:07:26.265 21:22:49 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:26.265 21:22:49 -- target/filesystem.sh@27 -- # sync 00:07:26.265 21:22:49 -- target/filesystem.sh@29 -- # i=0 00:07:26.265 21:22:49 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:26.265 21:22:49 -- target/filesystem.sh@37 -- # kill -0 2707177 00:07:26.265 21:22:49 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:26.265 21:22:49 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:26.265 21:22:49 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:26.265 21:22:49 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:26.523 00:07:26.523 real 0m3.814s 00:07:26.523 user 0m0.030s 00:07:26.523 sys 0m0.084s 00:07:26.523 21:22:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:26.523 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:26.523 ************************************ 00:07:26.523 END TEST filesystem_xfs 00:07:26.523 ************************************ 00:07:26.523 21:22:49 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:26.523 21:22:49 -- target/filesystem.sh@93 -- # sync 00:07:26.523 21:22:49 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:26.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:26.523 21:22:49 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:26.523 21:22:49 -- common/autotest_common.sh@1205 -- # local i=0 00:07:26.523 21:22:49 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:26.523 21:22:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:26.523 21:22:49 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:26.523 21:22:49 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:26.523 21:22:49 -- common/autotest_common.sh@1217 -- # return 0 00:07:26.523 21:22:49 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.523 21:22:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.523 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:26.523 21:22:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.523 21:22:49 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:26.523 21:22:49 -- target/filesystem.sh@101 -- # killprocess 2707177 00:07:26.523 21:22:49 -- common/autotest_common.sh@936 -- # '[' -z 2707177 ']' 00:07:26.523 21:22:49 -- common/autotest_common.sh@940 -- # kill -0 2707177 00:07:26.523 21:22:49 -- 
common/autotest_common.sh@941 -- # uname 00:07:26.523 21:22:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:26.783 21:22:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2707177 00:07:26.783 21:22:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:26.783 21:22:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:26.783 21:22:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2707177' 00:07:26.783 killing process with pid 2707177 00:07:26.783 21:22:49 -- common/autotest_common.sh@955 -- # kill 2707177 00:07:26.783 21:22:49 -- common/autotest_common.sh@960 -- # wait 2707177 00:07:27.042 21:22:49 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:27.042 00:07:27.042 real 0m16.414s 00:07:27.042 user 1m4.260s 00:07:27.042 sys 0m2.144s 00:07:27.042 21:22:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:27.042 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:27.042 ************************************ 00:07:27.042 END TEST nvmf_filesystem_no_in_capsule 00:07:27.042 ************************************ 00:07:27.042 21:22:49 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:27.042 21:22:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:27.042 21:22:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.042 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:27.301 ************************************ 00:07:27.301 START TEST nvmf_filesystem_in_capsule 00:07:27.301 ************************************ 00:07:27.301 21:22:49 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:27.301 21:22:49 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:27.301 21:22:49 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:27.301 21:22:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:27.301 21:22:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:27.301 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:27.301 21:22:49 -- nvmf/common.sh@470 -- # nvmfpid=2710180 00:07:27.301 21:22:49 -- nvmf/common.sh@471 -- # waitforlisten 2710180 00:07:27.301 21:22:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:27.301 21:22:49 -- common/autotest_common.sh@817 -- # '[' -z 2710180 ']' 00:07:27.301 21:22:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.301 21:22:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:27.301 21:22:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.301 21:22:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:27.302 21:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:27.302 [2024-04-24 21:22:50.046877] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
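This second pass (nvmf_filesystem_in_capsule) reruns the identical filesystem matrix with in_capsule=4096: when the transport is created below, -c 4096 replaces the first pass's -c 0, letting the host carry up to 4 KiB of write payload inside the command capsule itself instead of a separate data transfer. That one rpc flag is the only functional difference between the two passes; as a sketch:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c is the in-capsule data size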
00:07:27.302 [2024-04-24 21:22:50.046927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.302 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.302 [2024-04-24 21:22:50.122979] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.560 [2024-04-24 21:22:50.196623] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.560 [2024-04-24 21:22:50.196661] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.560 [2024-04-24 21:22:50.196671] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.560 [2024-04-24 21:22:50.196680] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.560 [2024-04-24 21:22:50.196687] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.560 [2024-04-24 21:22:50.196735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.560 [2024-04-24 21:22:50.196755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.560 [2024-04-24 21:22:50.196840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.560 [2024-04-24 21:22:50.196842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.129 21:22:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:28.129 21:22:50 -- common/autotest_common.sh@850 -- # return 0 00:07:28.129 21:22:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:28.129 21:22:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:28.129 21:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:28.129 21:22:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.129 21:22:50 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:28.129 21:22:50 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:28.129 21:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.129 21:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:28.129 [2024-04-24 21:22:50.904368] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.129 21:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.129 21:22:50 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:28.129 21:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.129 21:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:28.389 Malloc1 00:07:28.389 21:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.389 21:22:51 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:28.389 21:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.389 21:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:28.389 21:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.389 21:22:51 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:28.389 21:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.389 21:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:28.389 21:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.389 21:22:51 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.389 21:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.389 21:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:28.389 [2024-04-24 21:22:51.060249] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.389 21:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.389 21:22:51 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:28.389 21:22:51 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:28.389 21:22:51 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:28.389 21:22:51 -- common/autotest_common.sh@1366 -- # local bs 00:07:28.389 21:22:51 -- common/autotest_common.sh@1367 -- # local nb 00:07:28.389 21:22:51 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:28.389 21:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.389 21:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:28.389 21:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.389 21:22:51 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:28.389 { 00:07:28.389 "name": "Malloc1", 00:07:28.389 "aliases": [ 00:07:28.389 "e0763a6d-264d-48ca-9f0f-386bd7e20130" 00:07:28.389 ], 00:07:28.389 "product_name": "Malloc disk", 00:07:28.389 "block_size": 512, 00:07:28.389 "num_blocks": 1048576, 00:07:28.389 "uuid": "e0763a6d-264d-48ca-9f0f-386bd7e20130", 00:07:28.389 "assigned_rate_limits": { 00:07:28.389 "rw_ios_per_sec": 0, 00:07:28.389 "rw_mbytes_per_sec": 0, 00:07:28.389 "r_mbytes_per_sec": 0, 00:07:28.389 "w_mbytes_per_sec": 0 00:07:28.389 }, 00:07:28.389 "claimed": true, 00:07:28.389 "claim_type": "exclusive_write", 00:07:28.389 "zoned": false, 00:07:28.389 "supported_io_types": { 00:07:28.389 "read": true, 00:07:28.389 "write": true, 00:07:28.389 "unmap": true, 00:07:28.389 "write_zeroes": true, 00:07:28.389 "flush": true, 00:07:28.389 "reset": true, 00:07:28.389 "compare": false, 00:07:28.389 "compare_and_write": false, 00:07:28.389 "abort": true, 00:07:28.389 "nvme_admin": false, 00:07:28.389 "nvme_io": false 00:07:28.389 }, 00:07:28.389 "memory_domains": [ 00:07:28.389 { 00:07:28.389 "dma_device_id": "system", 00:07:28.389 "dma_device_type": 1 00:07:28.389 }, 00:07:28.389 { 00:07:28.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.389 "dma_device_type": 2 00:07:28.389 } 00:07:28.389 ], 00:07:28.389 "driver_specific": {} 00:07:28.389 } 00:07:28.389 ]' 00:07:28.389 21:22:51 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:28.389 21:22:51 -- common/autotest_common.sh@1369 -- # bs=512 00:07:28.389 21:22:51 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:28.389 21:22:51 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:28.389 21:22:51 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:28.389 21:22:51 -- common/autotest_common.sh@1374 -- # echo 512 00:07:28.389 21:22:51 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:28.389 21:22:51 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.767 21:22:52 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:29.767 21:22:52 -- common/autotest_common.sh@1184 -- # local i=0 00:07:29.767 21:22:52 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:29.767 21:22:52 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:29.767 21:22:52 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:31.673 21:22:54 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:31.673 21:22:54 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:31.933 21:22:54 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:31.933 21:22:54 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:31.933 21:22:54 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:31.933 21:22:54 -- common/autotest_common.sh@1194 -- # return 0 00:07:31.933 21:22:54 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:31.933 21:22:54 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:31.933 21:22:54 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:31.933 21:22:54 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:31.933 21:22:54 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:31.933 21:22:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:31.933 21:22:54 -- setup/common.sh@80 -- # echo 536870912 00:07:31.933 21:22:54 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:31.933 21:22:54 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:31.933 21:22:54 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:31.933 21:22:54 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:32.192 21:22:54 -- target/filesystem.sh@69 -- # partprobe 00:07:32.451 21:22:55 -- target/filesystem.sh@70 -- # sleep 1 00:07:33.389 21:22:56 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:33.389 21:22:56 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:33.389 21:22:56 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:33.389 21:22:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.389 21:22:56 -- common/autotest_common.sh@10 -- # set +x 00:07:33.649 ************************************ 00:07:33.649 START TEST filesystem_in_capsule_ext4 00:07:33.649 ************************************ 00:07:33.649 21:22:56 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:33.649 21:22:56 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:33.649 21:22:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:33.649 21:22:56 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:33.649 21:22:56 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:33.649 21:22:56 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:33.649 21:22:56 -- common/autotest_common.sh@914 -- # local i=0 00:07:33.649 21:22:56 -- common/autotest_common.sh@915 -- # local force 00:07:33.649 21:22:56 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:33.649 21:22:56 -- common/autotest_common.sh@918 -- # force=-F 00:07:33.649 21:22:56 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:33.649 mke2fs 1.46.5 (30-Dec-2021) 00:07:33.649 Discarding device blocks: 0/522240 done 00:07:33.649 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:33.649 Filesystem UUID: 3822e91e-c228-4ed1-aa57-9add7f5ea358 00:07:33.649 Superblock backups stored on blocks: 00:07:33.649 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:33.649 00:07:33.649 
Allocating group tables: 0/64 done 00:07:33.649 Writing inode tables: 0/64 done 00:07:36.940 Creating journal (8192 blocks): done 00:07:36.940 Writing superblocks and filesystem accounting information: 0/64 done 00:07:36.940 00:07:36.940 21:22:59 -- common/autotest_common.sh@931 -- # return 0 00:07:36.940 21:22:59 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.508 21:23:00 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.508 21:23:00 -- target/filesystem.sh@25 -- # sync 00:07:37.508 21:23:00 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.508 21:23:00 -- target/filesystem.sh@27 -- # sync 00:07:37.508 21:23:00 -- target/filesystem.sh@29 -- # i=0 00:07:37.508 21:23:00 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.508 21:23:00 -- target/filesystem.sh@37 -- # kill -0 2710180 00:07:37.508 21:23:00 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:37.508 21:23:00 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.508 21:23:00 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.508 21:23:00 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.508 00:07:37.508 real 0m3.908s 00:07:37.508 user 0m0.031s 00:07:37.508 sys 0m0.082s 00:07:37.508 21:23:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:37.508 21:23:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.508 ************************************ 00:07:37.508 END TEST filesystem_in_capsule_ext4 00:07:37.508 ************************************ 00:07:37.508 21:23:00 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:37.508 21:23:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:37.508 21:23:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.508 21:23:00 -- common/autotest_common.sh@10 -- # set +x 00:07:37.770 ************************************ 00:07:37.770 START TEST filesystem_in_capsule_btrfs 00:07:37.770 ************************************ 00:07:37.770 21:23:00 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:37.770 21:23:00 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:37.770 21:23:00 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.770 21:23:00 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:37.770 21:23:00 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:37.771 21:23:00 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:37.771 21:23:00 -- common/autotest_common.sh@914 -- # local i=0 00:07:37.771 21:23:00 -- common/autotest_common.sh@915 -- # local force 00:07:37.771 21:23:00 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:37.771 21:23:00 -- common/autotest_common.sh@920 -- # force=-f 00:07:37.771 21:23:00 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:38.030 btrfs-progs v6.6.2 00:07:38.030 See https://btrfs.readthedocs.io for more information. 00:07:38.030 00:07:38.030 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:38.030 NOTE: several default settings have changed in version 5.15, please make sure 00:07:38.030 this does not affect your deployments: 00:07:38.030 - DUP for metadata (-m dup) 00:07:38.030 - enabled no-holes (-O no-holes) 00:07:38.030 - enabled free-space-tree (-R free-space-tree) 00:07:38.030 00:07:38.030 Label: (null) 00:07:38.030 UUID: 19c3dbca-8d32-46c4-bbc5-dc1a105432a8 00:07:38.030 Node size: 16384 00:07:38.030 Sector size: 4096 00:07:38.030 Filesystem size: 510.00MiB 00:07:38.030 Block group profiles: 00:07:38.030 Data: single 8.00MiB 00:07:38.030 Metadata: DUP 32.00MiB 00:07:38.030 System: DUP 8.00MiB 00:07:38.030 SSD detected: yes 00:07:38.030 Zoned device: no 00:07:38.030 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:38.030 Runtime features: free-space-tree 00:07:38.030 Checksum: crc32c 00:07:38.030 Number of devices: 1 00:07:38.030 Devices: 00:07:38.030 ID SIZE PATH 00:07:38.030 1 510.00MiB /dev/nvme0n1p1 00:07:38.030 00:07:38.030 21:23:00 -- common/autotest_common.sh@931 -- # return 0 00:07:38.030 21:23:00 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.598 21:23:01 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.598 21:23:01 -- target/filesystem.sh@25 -- # sync 00:07:38.598 21:23:01 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.598 21:23:01 -- target/filesystem.sh@27 -- # sync 00:07:38.598 21:23:01 -- target/filesystem.sh@29 -- # i=0 00:07:38.598 21:23:01 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.598 21:23:01 -- target/filesystem.sh@37 -- # kill -0 2710180 00:07:38.598 21:23:01 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.598 21:23:01 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.598 21:23:01 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.598 21:23:01 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.598 00:07:38.598 real 0m0.884s 00:07:38.598 user 0m0.036s 00:07:38.598 sys 0m0.138s 00:07:38.598 21:23:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.598 21:23:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.598 ************************************ 00:07:38.598 END TEST filesystem_in_capsule_btrfs 00:07:38.598 ************************************ 00:07:38.598 21:23:01 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:38.598 21:23:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:38.598 21:23:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.598 21:23:01 -- common/autotest_common.sh@10 -- # set +x 00:07:38.858 ************************************ 00:07:38.858 START TEST filesystem_in_capsule_xfs 00:07:38.858 ************************************ 00:07:38.858 21:23:01 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:38.858 21:23:01 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:38.858 21:23:01 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.858 21:23:01 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:38.858 21:23:01 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:38.858 21:23:01 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:38.858 21:23:01 -- common/autotest_common.sh@914 -- # local i=0 00:07:38.858 21:23:01 -- common/autotest_common.sh@915 -- # local force 00:07:38.858 21:23:01 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:38.858 21:23:01 -- common/autotest_common.sh@920 -- # force=-f 
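The make_filesystem helper traced here is what picks the force flag per mkfs flavor: only ext4's mkfs understands -F, while btrfs and xfs take -f. A condensed sketch of its shape (the real helper in autotest_common.sh also keeps the retry counter i seen in the trace; retry handling is elided here):

    make_filesystem() {
        local fstype=$1 dev_name=$2
        local i=0 force
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs.$fstype $force "$dev_name"    # simplified: no retry loop
    }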
00:07:38.858 21:23:01 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:38.858 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:38.858 = sectsz=512 attr=2, projid32bit=1 00:07:38.858 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:38.858 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:38.858 data = bsize=4096 blocks=130560, imaxpct=25 00:07:38.859 = sunit=0 swidth=0 blks 00:07:38.859 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:38.859 log =internal log bsize=4096 blocks=16384, version=2 00:07:38.859 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:38.859 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:40.236 Discarding blocks...Done. 00:07:40.236 21:23:02 -- common/autotest_common.sh@931 -- # return 0 00:07:40.236 21:23:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.770 21:23:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.770 21:23:05 -- target/filesystem.sh@25 -- # sync 00:07:42.770 21:23:05 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.770 21:23:05 -- target/filesystem.sh@27 -- # sync 00:07:42.770 21:23:05 -- target/filesystem.sh@29 -- # i=0 00:07:42.770 21:23:05 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.770 21:23:05 -- target/filesystem.sh@37 -- # kill -0 2710180 00:07:42.770 21:23:05 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.770 21:23:05 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.770 21:23:05 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.770 21:23:05 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.770 00:07:42.770 real 0m3.680s 00:07:42.770 user 0m0.033s 00:07:42.770 sys 0m0.083s 00:07:42.770 21:23:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:42.770 21:23:05 -- common/autotest_common.sh@10 -- # set +x 00:07:42.770 ************************************ 00:07:42.770 END TEST filesystem_in_capsule_xfs 00:07:42.770 ************************************ 00:07:42.770 21:23:05 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:42.770 21:23:05 -- target/filesystem.sh@93 -- # sync 00:07:42.770 21:23:05 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:42.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.770 21:23:05 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:42.770 21:23:05 -- common/autotest_common.sh@1205 -- # local i=0 00:07:42.770 21:23:05 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:42.770 21:23:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.770 21:23:05 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:42.770 21:23:05 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.770 21:23:05 -- common/autotest_common.sh@1217 -- # return 0 00:07:42.770 21:23:05 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.770 21:23:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:42.770 21:23:05 -- common/autotest_common.sh@10 -- # set +x 00:07:42.770 21:23:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:42.770 21:23:05 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:42.770 21:23:05 -- target/filesystem.sh@101 -- # killprocess 2710180 00:07:42.770 21:23:05 -- common/autotest_common.sh@936 -- # '[' -z 2710180 ']' 00:07:42.770 21:23:05 -- common/autotest_common.sh@940 -- # kill -0 2710180 
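Teardown then mirrors setup in reverse, as the trace above shows: remove the test partition under a device lock, flush, disconnect the initiator, delete the subsystem over RPC, and kill the target, checking that it actually exits. As a sketch, with the rpc.py path assumed as before:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # hold the device lock while editing the partition table
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"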
00:07:42.770 21:23:05 -- common/autotest_common.sh@941 -- # uname 00:07:42.770 21:23:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:42.770 21:23:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2710180 00:07:42.770 21:23:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:42.770 21:23:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:42.770 21:23:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2710180' 00:07:42.770 killing process with pid 2710180 00:07:42.770 21:23:05 -- common/autotest_common.sh@955 -- # kill 2710180 00:07:42.770 21:23:05 -- common/autotest_common.sh@960 -- # wait 2710180 00:07:43.337 21:23:05 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:43.337 00:07:43.337 real 0m15.996s 00:07:43.337 user 1m2.508s 00:07:43.337 sys 0m2.193s 00:07:43.337 21:23:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.337 21:23:05 -- common/autotest_common.sh@10 -- # set +x 00:07:43.337 ************************************ 00:07:43.337 END TEST nvmf_filesystem_in_capsule 00:07:43.337 ************************************ 00:07:43.337 21:23:06 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:43.337 21:23:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:43.337 21:23:06 -- nvmf/common.sh@117 -- # sync 00:07:43.337 21:23:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:43.337 21:23:06 -- nvmf/common.sh@120 -- # set +e 00:07:43.337 21:23:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:43.337 21:23:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:43.337 rmmod nvme_tcp 00:07:43.337 rmmod nvme_fabrics 00:07:43.337 rmmod nvme_keyring 00:07:43.337 21:23:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:43.337 21:23:06 -- nvmf/common.sh@124 -- # set -e 00:07:43.337 21:23:06 -- nvmf/common.sh@125 -- # return 0 00:07:43.337 21:23:06 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:43.337 21:23:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:43.337 21:23:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:43.337 21:23:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:43.337 21:23:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:43.337 21:23:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:43.337 21:23:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.337 21:23:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.337 21:23:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.872 21:23:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:45.872 00:07:45.872 real 0m42.018s 00:07:45.872 user 2m8.823s 00:07:45.872 sys 0m9.865s 00:07:45.872 21:23:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:45.872 21:23:08 -- common/autotest_common.sh@10 -- # set +x 00:07:45.872 ************************************ 00:07:45.872 END TEST nvmf_filesystem 00:07:45.872 ************************************ 00:07:45.872 21:23:08 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:45.872 21:23:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:45.872 21:23:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.872 21:23:08 -- common/autotest_common.sh@10 -- # set +x 00:07:45.872 ************************************ 00:07:45.872 START TEST nvmf_discovery 00:07:45.872 ************************************ 00:07:45.872 21:23:08 
-- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:45.872 * Looking for test storage... 00:07:45.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.872 21:23:08 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.872 21:23:08 -- nvmf/common.sh@7 -- # uname -s 00:07:45.872 21:23:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.872 21:23:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.872 21:23:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.872 21:23:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.872 21:23:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.872 21:23:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.872 21:23:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.872 21:23:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.872 21:23:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.872 21:23:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.872 21:23:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:45.872 21:23:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:45.872 21:23:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.872 21:23:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.872 21:23:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.872 21:23:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.872 21:23:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.872 21:23:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.872 21:23:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.872 21:23:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.872 21:23:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.872 21:23:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.872 21:23:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.872 21:23:08 -- paths/export.sh@5 -- # export PATH 00:07:45.872 21:23:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.872 21:23:08 -- nvmf/common.sh@47 -- # : 0 00:07:45.872 21:23:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.872 21:23:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.872 21:23:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.872 21:23:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.872 21:23:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.872 21:23:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.872 21:23:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.872 21:23:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.872 21:23:08 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:45.872 21:23:08 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:45.872 21:23:08 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:45.872 21:23:08 -- target/discovery.sh@15 -- # hash nvme 00:07:45.872 21:23:08 -- target/discovery.sh@20 -- # nvmftestinit 00:07:45.872 21:23:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:45.872 21:23:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.872 21:23:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:45.872 21:23:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:45.872 21:23:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:45.872 21:23:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.872 21:23:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.872 21:23:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.872 21:23:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:45.872 21:23:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:45.872 21:23:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.872 21:23:08 -- common/autotest_common.sh@10 -- # set +x 00:07:52.440 21:23:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:52.440 21:23:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:52.440 21:23:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:52.440 21:23:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:52.440 21:23:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:52.440 21:23:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:52.440 21:23:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:52.440 21:23:14 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:52.440 21:23:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:52.440 21:23:14 -- nvmf/common.sh@296 -- # e810=() 00:07:52.440 21:23:14 -- nvmf/common.sh@296 -- # local -ga e810 00:07:52.440 21:23:14 -- nvmf/common.sh@297 -- # x722=() 00:07:52.440 21:23:14 -- nvmf/common.sh@297 -- # local -ga x722 00:07:52.440 21:23:14 -- nvmf/common.sh@298 -- # mlx=() 00:07:52.440 21:23:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:52.440 21:23:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.440 21:23:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.441 21:23:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.441 21:23:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.441 21:23:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.441 21:23:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.441 21:23:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.441 21:23:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.441 21:23:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.441 21:23:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.441 21:23:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.441 21:23:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:52.441 21:23:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:52.441 21:23:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:52.441 21:23:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.441 21:23:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:52.441 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:52.441 21:23:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.441 21:23:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:52.441 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:52.441 21:23:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:52.441 21:23:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.441 21:23:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.441 21:23:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:52.441 21:23:14 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.441 21:23:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:52.441 Found net devices under 0000:af:00.0: cvl_0_0 00:07:52.441 21:23:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.441 21:23:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.441 21:23:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.441 21:23:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:52.441 21:23:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.441 21:23:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:52.441 Found net devices under 0000:af:00.1: cvl_0_1 00:07:52.441 21:23:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.441 21:23:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:52.441 21:23:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:52.441 21:23:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:52.441 21:23:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.441 21:23:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.441 21:23:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.441 21:23:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:52.441 21:23:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.441 21:23:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.441 21:23:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:52.441 21:23:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.441 21:23:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.441 21:23:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:52.441 21:23:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:52.441 21:23:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.441 21:23:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.441 21:23:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.441 21:23:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.441 21:23:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:52.441 21:23:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.441 21:23:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.441 21:23:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.441 21:23:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:52.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:07:52.441 00:07:52.441 --- 10.0.0.2 ping statistics --- 00:07:52.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.441 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:07:52.441 21:23:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:52.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:07:52.441 00:07:52.441 --- 10.0.0.1 ping statistics --- 00:07:52.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.441 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:07:52.441 21:23:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.441 21:23:14 -- nvmf/common.sh@411 -- # return 0 00:07:52.441 21:23:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:52.441 21:23:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.441 21:23:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:52.441 21:23:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.441 21:23:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:52.441 21:23:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:52.441 21:23:14 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:52.441 21:23:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:52.441 21:23:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:52.441 21:23:14 -- common/autotest_common.sh@10 -- # set +x 00:07:52.441 21:23:14 -- nvmf/common.sh@470 -- # nvmfpid=2716709 00:07:52.441 21:23:14 -- nvmf/common.sh@471 -- # waitforlisten 2716709 00:07:52.441 21:23:14 -- common/autotest_common.sh@817 -- # '[' -z 2716709 ']' 00:07:52.441 21:23:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.441 21:23:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:52.441 21:23:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.441 21:23:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:52.441 21:23:14 -- common/autotest_common.sh@10 -- # set +x 00:07:52.441 21:23:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.441 [2024-04-24 21:23:14.806288] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:07:52.441 [2024-04-24 21:23:14.806332] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.441 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.441 [2024-04-24 21:23:14.880749] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.441 [2024-04-24 21:23:14.952936] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.441 [2024-04-24 21:23:14.952974] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.441 [2024-04-24 21:23:14.952984] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.441 [2024-04-24 21:23:14.952993] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.441 [2024-04-24 21:23:14.953000] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:52.441 [2024-04-24 21:23:14.953049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.441 [2024-04-24 21:23:14.953067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.441 [2024-04-24 21:23:14.953177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.441 [2024-04-24 21:23:14.953179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.009 21:23:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:53.009 21:23:15 -- common/autotest_common.sh@850 -- # return 0 00:07:53.009 21:23:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:53.009 21:23:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 21:23:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.010 21:23:15 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 [2024-04-24 21:23:15.658163] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@26 -- # seq 1 4 00:07:53.010 21:23:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.010 21:23:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 Null1 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 [2024-04-24 21:23:15.710480] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.010 21:23:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 Null2 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:53.010 21:23:15 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.010 21:23:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 Null3 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:53.010 21:23:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 Null4 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:53.010 
21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:53.010 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.010 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.010 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.010 21:23:15 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:07:53.269 00:07:53.269 Discovery Log Number of Records 6, Generation counter 6 00:07:53.269 =====Discovery Log Entry 0====== 00:07:53.269 trtype: tcp 00:07:53.269 adrfam: ipv4 00:07:53.269 subtype: current discovery subsystem 00:07:53.269 treq: not required 00:07:53.269 portid: 0 00:07:53.269 trsvcid: 4420 00:07:53.269 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:53.269 traddr: 10.0.0.2 00:07:53.269 eflags: explicit discovery connections, duplicate discovery information 00:07:53.269 sectype: none 00:07:53.269 =====Discovery Log Entry 1====== 00:07:53.269 trtype: tcp 00:07:53.269 adrfam: ipv4 00:07:53.269 subtype: nvme subsystem 00:07:53.269 treq: not required 00:07:53.269 portid: 0 00:07:53.269 trsvcid: 4420 00:07:53.269 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:53.269 traddr: 10.0.0.2 00:07:53.269 eflags: none 00:07:53.269 sectype: none 00:07:53.269 =====Discovery Log Entry 2====== 00:07:53.269 trtype: tcp 00:07:53.269 adrfam: ipv4 00:07:53.269 subtype: nvme subsystem 00:07:53.269 treq: not required 00:07:53.269 portid: 0 00:07:53.269 trsvcid: 4420 00:07:53.269 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:53.269 traddr: 10.0.0.2 00:07:53.269 eflags: none 00:07:53.269 sectype: none 00:07:53.269 =====Discovery Log Entry 3====== 00:07:53.269 trtype: tcp 00:07:53.269 adrfam: ipv4 00:07:53.269 subtype: nvme subsystem 00:07:53.269 treq: not required 00:07:53.269 portid: 0 00:07:53.269 trsvcid: 4420 00:07:53.269 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:53.269 traddr: 10.0.0.2 00:07:53.269 eflags: none 00:07:53.269 sectype: none 00:07:53.269 =====Discovery Log Entry 4====== 00:07:53.269 trtype: tcp 00:07:53.269 adrfam: ipv4 00:07:53.269 subtype: nvme subsystem 00:07:53.269 treq: not required 00:07:53.269 portid: 0 00:07:53.269 trsvcid: 4420 00:07:53.269 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:53.270 traddr: 10.0.0.2 00:07:53.270 eflags: none 00:07:53.270 sectype: none 00:07:53.270 =====Discovery Log Entry 5====== 00:07:53.270 trtype: tcp 00:07:53.270 adrfam: ipv4 00:07:53.270 subtype: discovery subsystem referral 00:07:53.270 treq: not required 00:07:53.270 portid: 0 00:07:53.270 trsvcid: 4430 00:07:53.270 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:53.270 traddr: 10.0.0.2 00:07:53.270 eflags: none 00:07:53.270 sectype: none 00:07:53.270 21:23:15 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:53.270 Perform nvmf subsystem discovery via RPC 00:07:53.270 21:23:15 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:53.270 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.270 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.270 [2024-04-24 21:23:15.918977] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:53.270 [ 00:07:53.270 { 00:07:53.270 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:53.270 "subtype": "Discovery", 00:07:53.270 "listen_addresses": [ 00:07:53.270 { 00:07:53.270 "transport": "TCP", 00:07:53.270 "trtype": "TCP", 00:07:53.270 "adrfam": "IPv4", 00:07:53.270 "traddr": "10.0.0.2", 00:07:53.270 "trsvcid": "4420" 00:07:53.270 } 00:07:53.270 ], 00:07:53.270 "allow_any_host": true, 00:07:53.270 "hosts": [] 00:07:53.270 }, 00:07:53.270 { 00:07:53.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.270 "subtype": "NVMe", 00:07:53.270 "listen_addresses": [ 00:07:53.270 { 00:07:53.270 "transport": "TCP", 00:07:53.270 "trtype": "TCP", 00:07:53.270 "adrfam": "IPv4", 00:07:53.270 "traddr": "10.0.0.2", 00:07:53.270 "trsvcid": "4420" 00:07:53.270 } 00:07:53.270 ], 00:07:53.270 "allow_any_host": true, 00:07:53.270 "hosts": [], 00:07:53.270 "serial_number": "SPDK00000000000001", 00:07:53.270 "model_number": "SPDK bdev Controller", 00:07:53.270 "max_namespaces": 32, 00:07:53.270 "min_cntlid": 1, 00:07:53.270 "max_cntlid": 65519, 00:07:53.270 "namespaces": [ 00:07:53.270 { 00:07:53.270 "nsid": 1, 00:07:53.270 "bdev_name": "Null1", 00:07:53.270 "name": "Null1", 00:07:53.270 "nguid": "E0B40481E21A44CAB9C860717FC7C38E", 00:07:53.270 "uuid": "e0b40481-e21a-44ca-b9c8-60717fc7c38e" 00:07:53.270 } 00:07:53.270 ] 00:07:53.270 }, 00:07:53.270 { 00:07:53.270 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:53.270 "subtype": "NVMe", 00:07:53.270 "listen_addresses": [ 00:07:53.270 { 00:07:53.270 "transport": "TCP", 00:07:53.270 "trtype": "TCP", 00:07:53.270 "adrfam": "IPv4", 00:07:53.270 "traddr": "10.0.0.2", 00:07:53.270 "trsvcid": "4420" 00:07:53.270 } 00:07:53.270 ], 00:07:53.270 "allow_any_host": true, 00:07:53.270 "hosts": [], 00:07:53.270 "serial_number": "SPDK00000000000002", 00:07:53.270 "model_number": "SPDK bdev Controller", 00:07:53.270 "max_namespaces": 32, 00:07:53.270 "min_cntlid": 1, 00:07:53.270 "max_cntlid": 65519, 00:07:53.270 "namespaces": [ 00:07:53.270 { 00:07:53.270 "nsid": 1, 00:07:53.270 "bdev_name": "Null2", 00:07:53.270 "name": "Null2", 00:07:53.270 "nguid": "01DACE6256424D21A0630374BA2F8B8D", 00:07:53.270 "uuid": "01dace62-5642-4d21-a063-0374ba2f8b8d" 00:07:53.270 } 00:07:53.270 ] 00:07:53.270 }, 00:07:53.270 { 00:07:53.270 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:53.270 "subtype": "NVMe", 00:07:53.270 "listen_addresses": [ 00:07:53.270 { 00:07:53.270 "transport": "TCP", 00:07:53.270 "trtype": "TCP", 00:07:53.270 "adrfam": "IPv4", 00:07:53.270 "traddr": "10.0.0.2", 00:07:53.270 "trsvcid": "4420" 00:07:53.270 } 00:07:53.270 ], 00:07:53.270 "allow_any_host": true, 00:07:53.270 "hosts": [], 00:07:53.270 "serial_number": "SPDK00000000000003", 00:07:53.270 "model_number": "SPDK bdev Controller", 00:07:53.270 "max_namespaces": 32, 00:07:53.270 "min_cntlid": 1, 00:07:53.270 "max_cntlid": 65519, 00:07:53.270 "namespaces": [ 00:07:53.270 { 00:07:53.270 "nsid": 1, 00:07:53.270 "bdev_name": "Null3", 00:07:53.270 "name": "Null3", 00:07:53.270 "nguid": "EC573B51CD97468EA39760F060229663", 00:07:53.270 "uuid": "ec573b51-cd97-468e-a397-60f060229663" 00:07:53.270 } 00:07:53.270 ] 
00:07:53.270 }, 00:07:53.270 { 00:07:53.270 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:53.270 "subtype": "NVMe", 00:07:53.270 "listen_addresses": [ 00:07:53.270 { 00:07:53.270 "transport": "TCP", 00:07:53.270 "trtype": "TCP", 00:07:53.270 "adrfam": "IPv4", 00:07:53.270 "traddr": "10.0.0.2", 00:07:53.270 "trsvcid": "4420" 00:07:53.270 } 00:07:53.270 ], 00:07:53.270 "allow_any_host": true, 00:07:53.270 "hosts": [], 00:07:53.270 "serial_number": "SPDK00000000000004", 00:07:53.270 "model_number": "SPDK bdev Controller", 00:07:53.270 "max_namespaces": 32, 00:07:53.270 "min_cntlid": 1, 00:07:53.270 "max_cntlid": 65519, 00:07:53.270 "namespaces": [ 00:07:53.270 { 00:07:53.270 "nsid": 1, 00:07:53.270 "bdev_name": "Null4", 00:07:53.270 "name": "Null4", 00:07:53.270 "nguid": "0C1CB1B823684E709A1B9E8E9C73EBFE", 00:07:53.270 "uuid": "0c1cb1b8-2368-4e70-9a1b-9e8e9c73ebfe" 00:07:53.270 } 00:07:53.270 ] 00:07:53.270 } 00:07:53.270 ] 00:07:53.270 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.270 21:23:15 -- target/discovery.sh@42 -- # seq 1 4 00:07:53.270 21:23:15 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.270 21:23:15 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:53.270 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.270 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.270 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.270 21:23:15 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:53.270 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.270 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.270 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.270 21:23:15 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.270 21:23:15 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:53.270 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.270 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.270 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.270 21:23:15 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:53.270 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.270 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.270 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.270 21:23:15 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.270 21:23:15 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:53.270 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.270 21:23:15 -- common/autotest_common.sh@10 -- # set +x 00:07:53.270 21:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.270 21:23:15 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:53.270 21:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.270 21:23:16 -- common/autotest_common.sh@10 -- # set +x 00:07:53.270 21:23:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.270 21:23:16 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.270 21:23:16 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:53.270 21:23:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.270 21:23:16 -- common/autotest_common.sh@10 -- # set +x 00:07:53.270 21:23:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:07:53.270 21:23:16 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:53.270 21:23:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.270 21:23:16 -- common/autotest_common.sh@10 -- # set +x 00:07:53.270 21:23:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.270 21:23:16 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:53.270 21:23:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.270 21:23:16 -- common/autotest_common.sh@10 -- # set +x 00:07:53.270 21:23:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.270 21:23:16 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:53.270 21:23:16 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:53.270 21:23:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.270 21:23:16 -- common/autotest_common.sh@10 -- # set +x 00:07:53.270 21:23:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.270 21:23:16 -- target/discovery.sh@49 -- # check_bdevs= 00:07:53.270 21:23:16 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:53.270 21:23:16 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:53.270 21:23:16 -- target/discovery.sh@57 -- # nvmftestfini 00:07:53.270 21:23:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:53.270 21:23:16 -- nvmf/common.sh@117 -- # sync 00:07:53.270 21:23:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:53.270 21:23:16 -- nvmf/common.sh@120 -- # set +e 00:07:53.270 21:23:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.270 21:23:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:53.270 rmmod nvme_tcp 00:07:53.270 rmmod nvme_fabrics 00:07:53.270 rmmod nvme_keyring 00:07:53.270 21:23:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.270 21:23:16 -- nvmf/common.sh@124 -- # set -e 00:07:53.270 21:23:16 -- nvmf/common.sh@125 -- # return 0 00:07:53.270 21:23:16 -- nvmf/common.sh@478 -- # '[' -n 2716709 ']' 00:07:53.270 21:23:16 -- nvmf/common.sh@479 -- # killprocess 2716709 00:07:53.270 21:23:16 -- common/autotest_common.sh@936 -- # '[' -z 2716709 ']' 00:07:53.270 21:23:16 -- common/autotest_common.sh@940 -- # kill -0 2716709 00:07:53.270 21:23:16 -- common/autotest_common.sh@941 -- # uname 00:07:53.270 21:23:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:53.270 21:23:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2716709 00:07:53.529 21:23:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:53.529 21:23:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:53.529 21:23:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2716709' 00:07:53.529 killing process with pid 2716709 00:07:53.529 21:23:16 -- common/autotest_common.sh@955 -- # kill 2716709 00:07:53.529 [2024-04-24 21:23:16.193982] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:53.529 21:23:16 -- common/autotest_common.sh@960 -- # wait 2716709 00:07:53.529 21:23:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:53.529 21:23:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:53.529 21:23:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:53.529 21:23:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.529 21:23:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.529 21:23:16 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.529 21:23:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.529 21:23:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.102 21:23:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:56.102 00:07:56.102 real 0m10.110s 00:07:56.102 user 0m7.403s 00:07:56.102 sys 0m5.159s 00:07:56.102 21:23:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.102 21:23:18 -- common/autotest_common.sh@10 -- # set +x 00:07:56.102 ************************************ 00:07:56.102 END TEST nvmf_discovery 00:07:56.102 ************************************ 00:07:56.102 21:23:18 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:56.102 21:23:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:56.102 21:23:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.102 21:23:18 -- common/autotest_common.sh@10 -- # set +x 00:07:56.102 ************************************ 00:07:56.102 START TEST nvmf_referrals 00:07:56.102 ************************************ 00:07:56.102 21:23:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:56.102 * Looking for test storage... 00:07:56.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.102 21:23:18 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.102 21:23:18 -- nvmf/common.sh@7 -- # uname -s 00:07:56.102 21:23:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.102 21:23:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.102 21:23:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.102 21:23:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.102 21:23:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.102 21:23:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.102 21:23:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.102 21:23:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.102 21:23:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.102 21:23:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.102 21:23:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:56.102 21:23:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:56.102 21:23:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.102 21:23:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.102 21:23:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.102 21:23:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.102 21:23:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.102 21:23:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.102 21:23:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.102 21:23:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.102 21:23:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.102 21:23:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.102 21:23:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.102 21:23:18 -- paths/export.sh@5 -- # export PATH 00:07:56.102 21:23:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.102 21:23:18 -- nvmf/common.sh@47 -- # : 0 00:07:56.102 21:23:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.102 21:23:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:56.102 21:23:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.102 21:23:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.102 21:23:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.102 21:23:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.102 21:23:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.102 21:23:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.102 21:23:18 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:56.102 21:23:18 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:56.102 21:23:18 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:56.102 21:23:18 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:56.102 21:23:18 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:56.102 21:23:18 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:56.102 21:23:18 -- target/referrals.sh@37 -- # nvmftestinit 00:07:56.102 21:23:18 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:07:56.102 21:23:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.102 21:23:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:56.102 21:23:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:56.102 21:23:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:56.102 21:23:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.102 21:23:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.102 21:23:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.102 21:23:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:56.102 21:23:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:56.102 21:23:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:56.102 21:23:18 -- common/autotest_common.sh@10 -- # set +x 00:08:02.670 21:23:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:02.670 21:23:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.670 21:23:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.670 21:23:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.670 21:23:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.670 21:23:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.670 21:23:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.670 21:23:25 -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.670 21:23:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.670 21:23:25 -- nvmf/common.sh@296 -- # e810=() 00:08:02.670 21:23:25 -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.670 21:23:25 -- nvmf/common.sh@297 -- # x722=() 00:08:02.670 21:23:25 -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.670 21:23:25 -- nvmf/common.sh@298 -- # mlx=() 00:08:02.670 21:23:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.670 21:23:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.670 21:23:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.670 21:23:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.670 21:23:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.670 21:23:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.670 21:23:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.670 21:23:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.670 21:23:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.670 21:23:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.670 21:23:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.670 21:23:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.670 21:23:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.670 21:23:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.670 21:23:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.670 21:23:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.670 21:23:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.670 21:23:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.670 21:23:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.670 21:23:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:02.670 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:02.670 21:23:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.670 21:23:25 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.670 21:23:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.670 21:23:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.670 21:23:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.670 21:23:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.670 21:23:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:02.670 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:02.670 21:23:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.670 21:23:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.670 21:23:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.670 21:23:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.670 21:23:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.670 21:23:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.670 21:23:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.670 21:23:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.670 21:23:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.670 21:23:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.670 21:23:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:02.670 21:23:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.670 21:23:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:02.670 Found net devices under 0000:af:00.0: cvl_0_0 00:08:02.670 21:23:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.670 21:23:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.670 21:23:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.671 21:23:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:02.671 21:23:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.671 21:23:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:02.671 Found net devices under 0000:af:00.1: cvl_0_1 00:08:02.671 21:23:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.671 21:23:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:02.671 21:23:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:02.671 21:23:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:02.671 21:23:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:02.671 21:23:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:02.671 21:23:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.671 21:23:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.671 21:23:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.671 21:23:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.671 21:23:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.671 21:23:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.671 21:23:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.671 21:23:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.671 21:23:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.671 21:23:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.671 21:23:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.671 21:23:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.671 21:23:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:02.671 21:23:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.671 21:23:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.671 21:23:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:02.671 21:23:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.930 21:23:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.930 21:23:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.930 21:23:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:02.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:08:02.930 00:08:02.930 --- 10.0.0.2 ping statistics --- 00:08:02.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.930 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:08:02.930 21:23:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:08:02.930 00:08:02.930 --- 10.0.0.1 ping statistics --- 00:08:02.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.930 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:08:02.930 21:23:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.930 21:23:25 -- nvmf/common.sh@411 -- # return 0 00:08:02.930 21:23:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:02.930 21:23:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.930 21:23:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:02.930 21:23:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:02.930 21:23:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.930 21:23:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:02.930 21:23:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:02.930 21:23:25 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:02.930 21:23:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:02.930 21:23:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:02.930 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:08:02.930 21:23:25 -- nvmf/common.sh@470 -- # nvmfpid=2720713 00:08:02.930 21:23:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:02.930 21:23:25 -- nvmf/common.sh@471 -- # waitforlisten 2720713 00:08:02.930 21:23:25 -- common/autotest_common.sh@817 -- # '[' -z 2720713 ']' 00:08:02.930 21:23:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.930 21:23:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:02.930 21:23:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.930 21:23:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:02.930 21:23:25 -- common/autotest_common.sh@10 -- # set +x 00:08:02.930 [2024-04-24 21:23:25.763850] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:08:02.930 [2024-04-24 21:23:25.763897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.930 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.190 [2024-04-24 21:23:25.840557] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.190 [2024-04-24 21:23:25.909092] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.190 [2024-04-24 21:23:25.909136] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.190 [2024-04-24 21:23:25.909145] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.190 [2024-04-24 21:23:25.909153] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.190 [2024-04-24 21:23:25.909177] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.190 [2024-04-24 21:23:25.909228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.190 [2024-04-24 21:23:25.909322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.190 [2024-04-24 21:23:25.909412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.190 [2024-04-24 21:23:25.909414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.756 21:23:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:03.756 21:23:26 -- common/autotest_common.sh@850 -- # return 0 00:08:03.756 21:23:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:03.756 21:23:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:03.756 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:08:03.756 21:23:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.757 21:23:26 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.757 21:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:03.757 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:08:03.757 [2024-04-24 21:23:26.615289] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.757 21:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:03.757 21:23:26 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:03.757 21:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:03.757 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:08:03.757 [2024-04-24 21:23:26.631528] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:03.757 21:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:03.757 21:23:26 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:03.757 21:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:03.757 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:08:03.757 21:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:03.757 21:23:26 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:03.757 21:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:03.757 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:08:04.016 21:23:26 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:08:04.016 21:23:26 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:04.016 21:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.016 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:08:04.016 21:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.016 21:23:26 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.016 21:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.016 21:23:26 -- target/referrals.sh@48 -- # jq length 00:08:04.016 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:08:04.016 21:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.016 21:23:26 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:04.016 21:23:26 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:04.016 21:23:26 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.016 21:23:26 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.016 21:23:26 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.016 21:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.016 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:08:04.016 21:23:26 -- target/referrals.sh@21 -- # sort 00:08:04.016 21:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.016 21:23:26 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:04.016 21:23:26 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:04.016 21:23:26 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:04.016 21:23:26 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.016 21:23:26 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.016 21:23:26 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.016 21:23:26 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.016 21:23:26 -- target/referrals.sh@26 -- # sort 00:08:04.275 21:23:26 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:04.275 21:23:26 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:04.275 21:23:26 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:04.275 21:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.275 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:08:04.275 21:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.275 21:23:26 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:04.275 21:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.275 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:08:04.275 21:23:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.275 21:23:26 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:04.275 21:23:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.275 21:23:26 -- common/autotest_common.sh@10 -- # set +x 00:08:04.275 21:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.275 21:23:27 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:08:04.275 21:23:27 -- target/referrals.sh@56 -- # jq length 00:08:04.275 21:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.275 21:23:27 -- common/autotest_common.sh@10 -- # set +x 00:08:04.275 21:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.275 21:23:27 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:04.275 21:23:27 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:04.275 21:23:27 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.275 21:23:27 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.275 21:23:27 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.275 21:23:27 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.275 21:23:27 -- target/referrals.sh@26 -- # sort 00:08:04.534 21:23:27 -- target/referrals.sh@26 -- # echo 00:08:04.534 21:23:27 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:04.534 21:23:27 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:04.534 21:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.534 21:23:27 -- common/autotest_common.sh@10 -- # set +x 00:08:04.534 21:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.534 21:23:27 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:04.534 21:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.534 21:23:27 -- common/autotest_common.sh@10 -- # set +x 00:08:04.534 21:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.534 21:23:27 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:04.534 21:23:27 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.534 21:23:27 -- target/referrals.sh@21 -- # sort 00:08:04.534 21:23:27 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.534 21:23:27 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.534 21:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.534 21:23:27 -- common/autotest_common.sh@10 -- # set +x 00:08:04.534 21:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.534 21:23:27 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:04.534 21:23:27 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:04.534 21:23:27 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:04.535 21:23:27 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.535 21:23:27 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.535 21:23:27 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.535 21:23:27 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.535 21:23:27 -- target/referrals.sh@26 -- # sort 00:08:04.535 21:23:27 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:04.535 21:23:27 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:04.535 21:23:27 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:08:04.535 21:23:27 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:04.535 21:23:27 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:04.535 21:23:27 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.535 21:23:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:04.794 21:23:27 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:04.794 21:23:27 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:04.794 21:23:27 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:04.794 21:23:27 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:04.794 21:23:27 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.794 21:23:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:04.794 21:23:27 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:04.794 21:23:27 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:04.794 21:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.794 21:23:27 -- common/autotest_common.sh@10 -- # set +x 00:08:04.794 21:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.794 21:23:27 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:04.794 21:23:27 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.794 21:23:27 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.794 21:23:27 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.794 21:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:04.794 21:23:27 -- common/autotest_common.sh@10 -- # set +x 00:08:04.794 21:23:27 -- target/referrals.sh@21 -- # sort 00:08:04.794 21:23:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:04.794 21:23:27 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:04.794 21:23:27 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:04.794 21:23:27 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:04.794 21:23:27 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.794 21:23:27 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.794 21:23:27 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.794 21:23:27 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.794 21:23:27 -- target/referrals.sh@26 -- # sort 00:08:05.052 21:23:27 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:05.052 21:23:27 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:05.052 21:23:27 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:05.052 21:23:27 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:05.052 21:23:27 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:08:05.052 21:23:27 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.052 21:23:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:05.311 21:23:27 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:05.311 21:23:27 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:05.311 21:23:27 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:05.311 21:23:27 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:05.311 21:23:27 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.311 21:23:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:05.311 21:23:28 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:05.311 21:23:28 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:05.311 21:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.311 21:23:28 -- common/autotest_common.sh@10 -- # set +x 00:08:05.311 21:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.311 21:23:28 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.311 21:23:28 -- target/referrals.sh@82 -- # jq length 00:08:05.311 21:23:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.311 21:23:28 -- common/autotest_common.sh@10 -- # set +x 00:08:05.311 21:23:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.311 21:23:28 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:05.312 21:23:28 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:05.312 21:23:28 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.312 21:23:28 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.312 21:23:28 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.312 21:23:28 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.312 21:23:28 -- target/referrals.sh@26 -- # sort 00:08:05.571 21:23:28 -- target/referrals.sh@26 -- # echo 00:08:05.571 21:23:28 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:05.571 21:23:28 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:05.571 21:23:28 -- target/referrals.sh@86 -- # nvmftestfini 00:08:05.571 21:23:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:05.571 21:23:28 -- nvmf/common.sh@117 -- # sync 00:08:05.571 21:23:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:05.571 21:23:28 -- nvmf/common.sh@120 -- # set +e 00:08:05.571 21:23:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:05.571 21:23:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:05.571 rmmod nvme_tcp 00:08:05.571 rmmod nvme_fabrics 00:08:05.571 rmmod nvme_keyring 00:08:05.571 21:23:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:05.571 21:23:28 -- nvmf/common.sh@124 -- # set -e 
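The referral assertions traced above reduce to a short RPC/nvme-cli round trip. A minimal sketch of the same cycle against a target already listening for discovery on 10.0.0.2:8009 (rpc.py here stands for scripts/rpc.py in the SPDK tree; the --hostnqn/--hostid flags from the log are omitted for brevity):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  # a referral may also name a specific subsystem instead of a discovery service
  rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  # cross-check what a host actually sees in the discovery log
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430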
00:08:05.571 21:23:28 -- nvmf/common.sh@125 -- # return 0 00:08:05.571 21:23:28 -- nvmf/common.sh@478 -- # '[' -n 2720713 ']' 00:08:05.571 21:23:28 -- nvmf/common.sh@479 -- # killprocess 2720713 00:08:05.571 21:23:28 -- common/autotest_common.sh@936 -- # '[' -z 2720713 ']' 00:08:05.571 21:23:28 -- common/autotest_common.sh@940 -- # kill -0 2720713 00:08:05.571 21:23:28 -- common/autotest_common.sh@941 -- # uname 00:08:05.571 21:23:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:05.571 21:23:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2720713 00:08:05.571 21:23:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:05.571 21:23:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:05.571 21:23:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2720713' 00:08:05.571 killing process with pid 2720713 00:08:05.571 21:23:28 -- common/autotest_common.sh@955 -- # kill 2720713 00:08:05.571 21:23:28 -- common/autotest_common.sh@960 -- # wait 2720713 00:08:05.830 21:23:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:05.830 21:23:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:05.830 21:23:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:05.830 21:23:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:05.830 21:23:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:05.830 21:23:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.830 21:23:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.830 21:23:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.366 21:23:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:08.366 00:08:08.366 real 0m11.984s 00:08:08.366 user 0m13.137s 00:08:08.366 sys 0m6.156s 00:08:08.366 21:23:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.366 21:23:30 -- common/autotest_common.sh@10 -- # set +x 00:08:08.366 ************************************ 00:08:08.366 END TEST nvmf_referrals 00:08:08.366 ************************************ 00:08:08.366 21:23:30 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:08.366 21:23:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:08.366 21:23:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.366 21:23:30 -- common/autotest_common.sh@10 -- # set +x 00:08:08.366 ************************************ 00:08:08.366 START TEST nvmf_connect_disconnect 00:08:08.366 ************************************ 00:08:08.366 21:23:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:08.366 * Looking for test storage... 
00:08:08.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.366 21:23:30 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.366 21:23:30 -- nvmf/common.sh@7 -- # uname -s 00:08:08.366 21:23:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.366 21:23:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.366 21:23:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.366 21:23:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.366 21:23:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.366 21:23:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.366 21:23:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.366 21:23:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.366 21:23:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.366 21:23:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.366 21:23:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:08.366 21:23:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:08.366 21:23:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.366 21:23:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.366 21:23:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.366 21:23:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.366 21:23:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.366 21:23:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.366 21:23:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.366 21:23:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.366 21:23:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.366 21:23:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.366 21:23:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.366 21:23:31 -- paths/export.sh@5 -- # export PATH 00:08:08.367 21:23:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.367 21:23:31 -- nvmf/common.sh@47 -- # : 0 00:08:08.367 21:23:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.367 21:23:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.367 21:23:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.367 21:23:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.367 21:23:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.367 21:23:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.367 21:23:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.367 21:23:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.367 21:23:31 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:08.367 21:23:31 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:08.367 21:23:31 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:08.367 21:23:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:08.367 21:23:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.367 21:23:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:08.367 21:23:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:08.367 21:23:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:08.367 21:23:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.367 21:23:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.367 21:23:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.367 21:23:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:08.367 21:23:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:08.367 21:23:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:08.367 21:23:31 -- common/autotest_common.sh@10 -- # set +x 00:08:14.955 21:23:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:14.955 21:23:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:14.955 21:23:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:14.955 21:23:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:14.955 21:23:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:14.955 21:23:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:14.955 21:23:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:14.955 21:23:37 -- nvmf/common.sh@295 -- # net_devs=() 00:08:14.955 21:23:37 -- nvmf/common.sh@295 -- # local -ga net_devs 
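The device scan traced below buckets NICs by PCI vendor:device ID (e810, x722, mlx families) before picking the two test ports. A hedged sketch of the same lookup outside the harness; 0x8086:0x159b is the E810 ID reported in this log, and the lspci filter is an assumption, not part of the harness itself:

  # map E810 PCI functions to their kernel netdev names
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
      echo "Found net devices under $pci: $(basename "$dev")"
    done
  done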
00:08:14.955 21:23:37 -- nvmf/common.sh@296 -- # e810=() 00:08:14.955 21:23:37 -- nvmf/common.sh@296 -- # local -ga e810 00:08:14.955 21:23:37 -- nvmf/common.sh@297 -- # x722=() 00:08:14.955 21:23:37 -- nvmf/common.sh@297 -- # local -ga x722 00:08:14.955 21:23:37 -- nvmf/common.sh@298 -- # mlx=() 00:08:14.955 21:23:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:14.955 21:23:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.955 21:23:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.955 21:23:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.955 21:23:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.955 21:23:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.955 21:23:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.955 21:23:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.955 21:23:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.955 21:23:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.955 21:23:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.955 21:23:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.955 21:23:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:14.955 21:23:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:14.955 21:23:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:14.955 21:23:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.955 21:23:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:14.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:14.955 21:23:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.955 21:23:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:14.955 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:14.955 21:23:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:14.955 21:23:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.955 21:23:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.955 21:23:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:14.955 21:23:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.955 21:23:37 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:af:00.0: cvl_0_0' 00:08:14.955 Found net devices under 0000:af:00.0: cvl_0_0 00:08:14.955 21:23:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.955 21:23:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.955 21:23:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.955 21:23:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:14.955 21:23:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.955 21:23:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:14.955 Found net devices under 0000:af:00.1: cvl_0_1 00:08:14.955 21:23:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.955 21:23:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:14.955 21:23:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:14.955 21:23:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:14.955 21:23:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:14.955 21:23:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.955 21:23:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.955 21:23:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.955 21:23:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:14.955 21:23:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.955 21:23:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.955 21:23:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:14.955 21:23:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.955 21:23:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.955 21:23:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:14.956 21:23:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:15.231 21:23:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.231 21:23:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.231 21:23:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.231 21:23:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.231 21:23:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:15.231 21:23:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.231 21:23:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.231 21:23:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.489 21:23:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:15.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:08:15.489 00:08:15.489 --- 10.0.0.2 ping statistics --- 00:08:15.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.489 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:08:15.489 21:23:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:08:15.489 00:08:15.489 --- 10.0.0.1 ping statistics --- 00:08:15.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.489 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:08:15.489 21:23:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.489 21:23:38 -- nvmf/common.sh@411 -- # return 0 00:08:15.489 21:23:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:15.489 21:23:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.489 21:23:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:15.489 21:23:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:15.489 21:23:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.489 21:23:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:15.489 21:23:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:15.489 21:23:38 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:15.489 21:23:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:15.489 21:23:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:15.489 21:23:38 -- common/autotest_common.sh@10 -- # set +x 00:08:15.489 21:23:38 -- nvmf/common.sh@470 -- # nvmfpid=2725047 00:08:15.489 21:23:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.490 21:23:38 -- nvmf/common.sh@471 -- # waitforlisten 2725047 00:08:15.490 21:23:38 -- common/autotest_common.sh@817 -- # '[' -z 2725047 ']' 00:08:15.490 21:23:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.490 21:23:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:15.490 21:23:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.490 21:23:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:15.490 21:23:38 -- common/autotest_common.sh@10 -- # set +x 00:08:15.490 [2024-04-24 21:23:38.246121] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:08:15.490 [2024-04-24 21:23:38.246167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.490 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.490 [2024-04-24 21:23:38.318765] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.748 [2024-04-24 21:23:38.392356] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.748 [2024-04-24 21:23:38.392395] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.748 [2024-04-24 21:23:38.392405] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.748 [2024-04-24 21:23:38.392414] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.748 [2024-04-24 21:23:38.392421] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
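The subsystem setup and connect/disconnect loop traced in the lines that follow boil down to a handful of RPCs plus nvme-cli. A minimal sketch of one iteration, using the names from this log (Malloc0, cnode1) and again leaving out the host-NQN flags:

  rpc.py bdev_malloc_create 64 512          # creates Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # one of the five connect/disconnect iterations
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1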
00:08:15.748 [2024-04-24 21:23:38.392497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.748 [2024-04-24 21:23:38.392532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.748 [2024-04-24 21:23:38.392619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.748 [2024-04-24 21:23:38.392621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.314 21:23:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:16.314 21:23:39 -- common/autotest_common.sh@850 -- # return 0 00:08:16.314 21:23:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:16.314 21:23:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:16.314 21:23:39 -- common/autotest_common.sh@10 -- # set +x 00:08:16.314 21:23:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.314 21:23:39 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:16.314 21:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.314 21:23:39 -- common/autotest_common.sh@10 -- # set +x 00:08:16.314 [2024-04-24 21:23:39.090344] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.314 21:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.314 21:23:39 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:16.314 21:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.314 21:23:39 -- common/autotest_common.sh@10 -- # set +x 00:08:16.314 21:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.314 21:23:39 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:16.314 21:23:39 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:16.314 21:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.314 21:23:39 -- common/autotest_common.sh@10 -- # set +x 00:08:16.314 21:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.314 21:23:39 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.314 21:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.314 21:23:39 -- common/autotest_common.sh@10 -- # set +x 00:08:16.314 21:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.314 21:23:39 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.314 21:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.314 21:23:39 -- common/autotest_common.sh@10 -- # set +x 00:08:16.314 [2024-04-24 21:23:39.145192] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.314 21:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.314 21:23:39 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:16.314 21:23:39 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:16.314 21:23:39 -- target/connect_disconnect.sh@34 -- # set +x 00:08:20.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.654 21:23:56 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:33.654 21:23:56 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:33.654 21:23:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:33.654 21:23:56 -- nvmf/common.sh@117 -- # sync 00:08:33.654 21:23:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.654 21:23:56 -- nvmf/common.sh@120 -- # set +e 00:08:33.654 21:23:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.654 21:23:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.654 rmmod nvme_tcp 00:08:33.654 rmmod nvme_fabrics 00:08:33.913 rmmod nvme_keyring 00:08:33.913 21:23:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.913 21:23:56 -- nvmf/common.sh@124 -- # set -e 00:08:33.913 21:23:56 -- nvmf/common.sh@125 -- # return 0 00:08:33.913 21:23:56 -- nvmf/common.sh@478 -- # '[' -n 2725047 ']' 00:08:33.913 21:23:56 -- nvmf/common.sh@479 -- # killprocess 2725047 00:08:33.913 21:23:56 -- common/autotest_common.sh@936 -- # '[' -z 2725047 ']' 00:08:33.913 21:23:56 -- common/autotest_common.sh@940 -- # kill -0 2725047 00:08:33.913 21:23:56 -- common/autotest_common.sh@941 -- # uname 00:08:33.913 21:23:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:33.913 21:23:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2725047 00:08:33.913 21:23:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:33.913 21:23:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:33.913 21:23:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2725047' 00:08:33.913 killing process with pid 2725047 00:08:33.913 21:23:56 -- common/autotest_common.sh@955 -- # kill 2725047 00:08:33.913 21:23:56 -- common/autotest_common.sh@960 -- # wait 2725047 00:08:34.172 21:23:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:34.172 21:23:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:34.172 21:23:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:34.172 21:23:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.172 21:23:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:34.172 21:23:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.172 21:23:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.172 21:23:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.077 21:23:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:36.077 00:08:36.077 real 0m28.051s 00:08:36.077 user 1m14.669s 00:08:36.077 sys 0m7.486s 00:08:36.077 21:23:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:36.077 21:23:58 -- common/autotest_common.sh@10 -- # set +x 00:08:36.077 ************************************ 00:08:36.077 END TEST nvmf_connect_disconnect 00:08:36.077 ************************************ 00:08:36.337 21:23:58 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:36.337 21:23:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:36.337 21:23:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.337 21:23:58 -- common/autotest_common.sh@10 -- # set +x 00:08:36.337 ************************************ 00:08:36.337 START TEST nvmf_multitarget 00:08:36.337 ************************************ 00:08:36.337 21:23:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:08:36.597 * Looking for test storage... 00:08:36.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.597 21:23:59 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.597 21:23:59 -- nvmf/common.sh@7 -- # uname -s 00:08:36.597 21:23:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.597 21:23:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.597 21:23:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.597 21:23:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.597 21:23:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.597 21:23:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.597 21:23:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.597 21:23:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.597 21:23:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.597 21:23:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.597 21:23:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:36.597 21:23:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:36.597 21:23:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.597 21:23:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.597 21:23:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.597 21:23:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.597 21:23:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.597 21:23:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.597 21:23:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.597 21:23:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.597 21:23:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.597 21:23:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.597 21:23:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.597 21:23:59 -- paths/export.sh@5 -- # export PATH 00:08:36.597 21:23:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.597 21:23:59 -- nvmf/common.sh@47 -- # : 0 00:08:36.597 21:23:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.597 21:23:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.597 21:23:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.597 21:23:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.597 21:23:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.597 21:23:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.597 21:23:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.597 21:23:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.597 21:23:59 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:36.597 21:23:59 -- target/multitarget.sh@15 -- # nvmftestinit 00:08:36.597 21:23:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:36.597 21:23:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.597 21:23:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:36.597 21:23:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:36.597 21:23:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:36.597 21:23:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.597 21:23:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.598 21:23:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.598 21:23:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:36.598 21:23:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:36.598 21:23:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:36.598 21:23:59 -- common/autotest_common.sh@10 -- # set +x 00:08:43.169 21:24:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:43.169 21:24:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:43.169 21:24:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:43.169 21:24:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:43.169 21:24:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:43.169 21:24:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:43.169 21:24:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:43.169 21:24:05 -- nvmf/common.sh@295 -- # net_devs=() 00:08:43.169 21:24:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:43.169 21:24:05 -- 
nvmf/common.sh@296 -- # e810=() 00:08:43.169 21:24:05 -- nvmf/common.sh@296 -- # local -ga e810 00:08:43.169 21:24:05 -- nvmf/common.sh@297 -- # x722=() 00:08:43.169 21:24:05 -- nvmf/common.sh@297 -- # local -ga x722 00:08:43.169 21:24:05 -- nvmf/common.sh@298 -- # mlx=() 00:08:43.169 21:24:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:43.169 21:24:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.169 21:24:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.169 21:24:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.169 21:24:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.169 21:24:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.169 21:24:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.169 21:24:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.169 21:24:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.169 21:24:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.169 21:24:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.169 21:24:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.169 21:24:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:43.169 21:24:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:43.169 21:24:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:43.169 21:24:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.169 21:24:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:43.169 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:43.169 21:24:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.169 21:24:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:43.169 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:43.169 21:24:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:43.169 21:24:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.169 21:24:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.169 21:24:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:43.169 21:24:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.169 21:24:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:08:43.169 Found net devices under 0000:af:00.0: cvl_0_0 00:08:43.169 21:24:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.169 21:24:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.169 21:24:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.169 21:24:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:43.169 21:24:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.169 21:24:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:43.169 Found net devices under 0000:af:00.1: cvl_0_1 00:08:43.169 21:24:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.169 21:24:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:43.169 21:24:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:43.169 21:24:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:43.169 21:24:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:43.169 21:24:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.169 21:24:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.169 21:24:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.169 21:24:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:43.169 21:24:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.169 21:24:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.169 21:24:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:43.170 21:24:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.170 21:24:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.170 21:24:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:43.170 21:24:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:43.170 21:24:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.170 21:24:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.170 21:24:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.170 21:24:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.452 21:24:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:43.452 21:24:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.452 21:24:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.452 21:24:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.452 21:24:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:43.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:08:43.452 00:08:43.452 --- 10.0.0.2 ping statistics --- 00:08:43.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.452 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:08:43.453 21:24:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:43.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:08:43.453 00:08:43.453 --- 10.0.0.1 ping statistics --- 00:08:43.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.453 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:08:43.453 21:24:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.453 21:24:06 -- nvmf/common.sh@411 -- # return 0 00:08:43.453 21:24:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:43.453 21:24:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.453 21:24:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:43.453 21:24:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:43.453 21:24:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.453 21:24:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:43.453 21:24:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:43.453 21:24:06 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:43.453 21:24:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:43.453 21:24:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:43.453 21:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:43.453 21:24:06 -- nvmf/common.sh@470 -- # nvmfpid=2732611 00:08:43.453 21:24:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.453 21:24:06 -- nvmf/common.sh@471 -- # waitforlisten 2732611 00:08:43.453 21:24:06 -- common/autotest_common.sh@817 -- # '[' -z 2732611 ']' 00:08:43.453 21:24:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.453 21:24:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:43.453 21:24:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.453 21:24:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:43.453 21:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:43.453 [2024-04-24 21:24:06.302698] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:08:43.453 [2024-04-24 21:24:06.302745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.453 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.712 [2024-04-24 21:24:06.375487] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.712 [2024-04-24 21:24:06.452194] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.712 [2024-04-24 21:24:06.452230] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.712 [2024-04-24 21:24:06.452240] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.712 [2024-04-24 21:24:06.452249] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.712 [2024-04-24 21:24:06.452272] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
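The multitarget checks traced next go through a dedicated helper, test/nvmf/target/multitarget_rpc.py, visible in the trace. A sketch of the same create/count/delete cycle:

  multitarget_rpc.py nvmf_get_targets | jq length     # only the default target -> 1
  multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
  multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
  multitarget_rpc.py nvmf_get_targets | jq length     # -> 3
  multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
  multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
  multitarget_rpc.py nvmf_get_targets | jq length     # back to 1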
00:08:43.712 [2024-04-24 21:24:06.452324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.712 [2024-04-24 21:24:06.452416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.712 [2024-04-24 21:24:06.452501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.712 [2024-04-24 21:24:06.452503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.277 21:24:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:44.277 21:24:07 -- common/autotest_common.sh@850 -- # return 0 00:08:44.277 21:24:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:44.277 21:24:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:44.277 21:24:07 -- common/autotest_common.sh@10 -- # set +x 00:08:44.277 21:24:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.277 21:24:07 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:44.277 21:24:07 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:44.277 21:24:07 -- target/multitarget.sh@21 -- # jq length 00:08:44.536 21:24:07 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:44.537 21:24:07 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:44.537 "nvmf_tgt_1" 00:08:44.537 21:24:07 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:44.795 "nvmf_tgt_2" 00:08:44.795 21:24:07 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:44.795 21:24:07 -- target/multitarget.sh@28 -- # jq length 00:08:44.795 21:24:07 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:44.795 21:24:07 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:44.795 true 00:08:44.795 21:24:07 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:45.053 true 00:08:45.053 21:24:07 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:45.053 21:24:07 -- target/multitarget.sh@35 -- # jq length 00:08:45.053 21:24:07 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:45.053 21:24:07 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:45.053 21:24:07 -- target/multitarget.sh@41 -- # nvmftestfini 00:08:45.053 21:24:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:45.053 21:24:07 -- nvmf/common.sh@117 -- # sync 00:08:45.053 21:24:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.053 21:24:07 -- nvmf/common.sh@120 -- # set +e 00:08:45.053 21:24:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.053 21:24:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.053 rmmod nvme_tcp 00:08:45.053 rmmod nvme_fabrics 00:08:45.053 rmmod nvme_keyring 00:08:45.053 21:24:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.053 21:24:07 -- nvmf/common.sh@124 -- # set -e 00:08:45.053 21:24:07 -- nvmf/common.sh@125 -- # return 0 
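The multitarget test body traced above boils down to a handful of calls against test/nvmf/target/multitarget_rpc.py: count the targets, add two, recount, delete them, recount. A condensed sketch of the same sequence (workspace prefix shortened; the jq checks mirror the '[' 1 '!=' 1 ']'-style assertions in the trace):

  rpc=test/nvmf/target/multitarget_rpc.py              # path shortened from the log
  [ "$($rpc nvmf_get_targets | jq length)" = 1 ]       # only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" = 3 ]       # default + two new targets
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" = 1 ]       # back to just the default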
00:08:45.053 21:24:07 -- nvmf/common.sh@478 -- # '[' -n 2732611 ']' 00:08:45.053 21:24:07 -- nvmf/common.sh@479 -- # killprocess 2732611 00:08:45.053 21:24:07 -- common/autotest_common.sh@936 -- # '[' -z 2732611 ']' 00:08:45.053 21:24:07 -- common/autotest_common.sh@940 -- # kill -0 2732611 00:08:45.053 21:24:07 -- common/autotest_common.sh@941 -- # uname 00:08:45.053 21:24:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.053 21:24:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2732611 00:08:45.312 21:24:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:45.312 21:24:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:45.312 21:24:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2732611' 00:08:45.312 killing process with pid 2732611 00:08:45.312 21:24:07 -- common/autotest_common.sh@955 -- # kill 2732611 00:08:45.312 21:24:07 -- common/autotest_common.sh@960 -- # wait 2732611 00:08:45.312 21:24:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:45.312 21:24:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:45.312 21:24:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:45.312 21:24:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.312 21:24:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.312 21:24:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.312 21:24:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.312 21:24:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.868 21:24:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:47.868 00:08:47.868 real 0m11.131s 00:08:47.868 user 0m9.504s 00:08:47.868 sys 0m5.821s 00:08:47.868 21:24:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:47.868 21:24:10 -- common/autotest_common.sh@10 -- # set +x 00:08:47.868 ************************************ 00:08:47.868 END TEST nvmf_multitarget 00:08:47.868 ************************************ 00:08:47.869 21:24:10 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:47.869 21:24:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:47.869 21:24:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.869 21:24:10 -- common/autotest_common.sh@10 -- # set +x 00:08:47.869 ************************************ 00:08:47.869 START TEST nvmf_rpc 00:08:47.869 ************************************ 00:08:47.869 21:24:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:47.869 * Looking for test storage... 
00:08:47.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.869 21:24:10 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.869 21:24:10 -- nvmf/common.sh@7 -- # uname -s 00:08:47.869 21:24:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.869 21:24:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.869 21:24:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.869 21:24:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.869 21:24:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.869 21:24:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.869 21:24:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.869 21:24:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.869 21:24:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.869 21:24:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.869 21:24:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:47.869 21:24:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:47.869 21:24:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.869 21:24:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.869 21:24:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.869 21:24:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.869 21:24:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.869 21:24:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.869 21:24:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.869 21:24:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.869 21:24:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.869 21:24:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.869 21:24:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.869 21:24:10 -- paths/export.sh@5 -- # export PATH 00:08:47.869 21:24:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.869 21:24:10 -- nvmf/common.sh@47 -- # : 0 00:08:47.869 21:24:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.869 21:24:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.869 21:24:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.869 21:24:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.869 21:24:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.869 21:24:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:47.869 21:24:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:47.869 21:24:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.869 21:24:10 -- target/rpc.sh@11 -- # loops=5 00:08:47.869 21:24:10 -- target/rpc.sh@23 -- # nvmftestinit 00:08:47.869 21:24:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:47.869 21:24:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.869 21:24:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:47.869 21:24:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:47.869 21:24:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:47.869 21:24:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.869 21:24:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.869 21:24:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.869 21:24:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:47.869 21:24:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:47.869 21:24:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:47.869 21:24:10 -- common/autotest_common.sh@10 -- # set +x 00:08:54.433 21:24:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:54.433 21:24:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.433 21:24:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.433 21:24:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.433 21:24:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.433 21:24:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.433 21:24:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.433 21:24:16 -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.433 21:24:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.433 21:24:16 -- nvmf/common.sh@296 -- # e810=() 00:08:54.433 21:24:16 -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.433 
21:24:16 -- nvmf/common.sh@297 -- # x722=() 00:08:54.433 21:24:16 -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.433 21:24:16 -- nvmf/common.sh@298 -- # mlx=() 00:08:54.433 21:24:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.433 21:24:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.433 21:24:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.433 21:24:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.433 21:24:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.433 21:24:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.433 21:24:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.433 21:24:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.433 21:24:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.433 21:24:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.433 21:24:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.433 21:24:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.433 21:24:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.433 21:24:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.434 21:24:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.434 21:24:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.434 21:24:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:54.434 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:54.434 21:24:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.434 21:24:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:54.434 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:54.434 21:24:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.434 21:24:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.434 21:24:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.434 21:24:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:54.434 21:24:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.434 21:24:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:54.434 Found net devices under 0000:af:00.0: cvl_0_0 00:08:54.434 21:24:16 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:54.434 21:24:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.434 21:24:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.434 21:24:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:54.434 21:24:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.434 21:24:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:54.434 Found net devices under 0000:af:00.1: cvl_0_1 00:08:54.434 21:24:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.434 21:24:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:54.434 21:24:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:54.434 21:24:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:54.434 21:24:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:54.434 21:24:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.434 21:24:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.434 21:24:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.434 21:24:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.434 21:24:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.434 21:24:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.434 21:24:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.434 21:24:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.434 21:24:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.434 21:24:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.434 21:24:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.434 21:24:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.434 21:24:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.434 21:24:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.434 21:24:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.434 21:24:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.434 21:24:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.434 21:24:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.434 21:24:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.434 21:24:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:08:54.434 00:08:54.434 --- 10.0.0.2 ping statistics --- 00:08:54.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.434 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:08:54.434 21:24:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:54.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:08:54.434 00:08:54.434 --- 10.0.0.1 ping statistics --- 00:08:54.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.434 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:08:54.434 21:24:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.434 21:24:17 -- nvmf/common.sh@411 -- # return 0 00:08:54.434 21:24:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:54.434 21:24:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.434 21:24:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:54.434 21:24:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:54.434 21:24:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.434 21:24:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:54.434 21:24:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:54.434 21:24:17 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:54.434 21:24:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:54.434 21:24:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:54.434 21:24:17 -- common/autotest_common.sh@10 -- # set +x 00:08:54.434 21:24:17 -- nvmf/common.sh@470 -- # nvmfpid=2736607 00:08:54.434 21:24:17 -- nvmf/common.sh@471 -- # waitforlisten 2736607 00:08:54.434 21:24:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:54.434 21:24:17 -- common/autotest_common.sh@817 -- # '[' -z 2736607 ']' 00:08:54.434 21:24:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.434 21:24:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:54.434 21:24:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.434 21:24:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:54.434 21:24:17 -- common/autotest_common.sh@10 -- # set +x 00:08:54.434 [2024-04-24 21:24:17.264004] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:08:54.434 [2024-04-24 21:24:17.264051] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.434 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.692 [2024-04-24 21:24:17.339697] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.692 [2024-04-24 21:24:17.413463] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.692 [2024-04-24 21:24:17.413500] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.692 [2024-04-24 21:24:17.413510] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.692 [2024-04-24 21:24:17.413519] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.692 [2024-04-24 21:24:17.413542] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
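nvmfappstart then launches the target application inside that namespace; the EAL and app_setup_trace notices above are its startup output. Reduced to its essentials, the helper runs the command below (flags copied from the trace; the polling loop is only a sketch of waitforlisten, which in the real helper also retries an RPC against the socket):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &     # shm id 0, all tracepoint groups, cores 0-3
  nvmfpid=$!
  # waitforlisten (sketch): block until the app's RPC socket exists
  for (( i = 0; i < 100; i++ )); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done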
00:08:54.692 [2024-04-24 21:24:17.413594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.692 [2024-04-24 21:24:17.413687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.693 [2024-04-24 21:24:17.413768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.693 [2024-04-24 21:24:17.413769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.258 21:24:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:55.258 21:24:18 -- common/autotest_common.sh@850 -- # return 0 00:08:55.258 21:24:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:55.258 21:24:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:55.258 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:55.258 21:24:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.258 21:24:18 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:55.258 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.258 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:55.258 21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.258 21:24:18 -- target/rpc.sh@26 -- # stats='{ 00:08:55.258 "tick_rate": 2500000000, 00:08:55.258 "poll_groups": [ 00:08:55.258 { 00:08:55.258 "name": "nvmf_tgt_poll_group_0", 00:08:55.258 "admin_qpairs": 0, 00:08:55.258 "io_qpairs": 0, 00:08:55.258 "current_admin_qpairs": 0, 00:08:55.258 "current_io_qpairs": 0, 00:08:55.258 "pending_bdev_io": 0, 00:08:55.258 "completed_nvme_io": 0, 00:08:55.258 "transports": [] 00:08:55.258 }, 00:08:55.258 { 00:08:55.258 "name": "nvmf_tgt_poll_group_1", 00:08:55.258 "admin_qpairs": 0, 00:08:55.258 "io_qpairs": 0, 00:08:55.258 "current_admin_qpairs": 0, 00:08:55.258 "current_io_qpairs": 0, 00:08:55.258 "pending_bdev_io": 0, 00:08:55.258 "completed_nvme_io": 0, 00:08:55.258 "transports": [] 00:08:55.258 }, 00:08:55.258 { 00:08:55.258 "name": "nvmf_tgt_poll_group_2", 00:08:55.258 "admin_qpairs": 0, 00:08:55.258 "io_qpairs": 0, 00:08:55.258 "current_admin_qpairs": 0, 00:08:55.258 "current_io_qpairs": 0, 00:08:55.258 "pending_bdev_io": 0, 00:08:55.258 "completed_nvme_io": 0, 00:08:55.258 "transports": [] 00:08:55.258 }, 00:08:55.258 { 00:08:55.258 "name": "nvmf_tgt_poll_group_3", 00:08:55.258 "admin_qpairs": 0, 00:08:55.258 "io_qpairs": 0, 00:08:55.258 "current_admin_qpairs": 0, 00:08:55.258 "current_io_qpairs": 0, 00:08:55.258 "pending_bdev_io": 0, 00:08:55.258 "completed_nvme_io": 0, 00:08:55.258 "transports": [] 00:08:55.258 } 00:08:55.258 ] 00:08:55.258 }' 00:08:55.258 21:24:18 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:55.258 21:24:18 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:55.258 21:24:18 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:55.258 21:24:18 -- target/rpc.sh@15 -- # wc -l 00:08:55.516 21:24:18 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:55.516 21:24:18 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:55.516 21:24:18 -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:55.516 21:24:18 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.516 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.516 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:55.516 [2024-04-24 21:24:18.232740] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.516 21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.516 21:24:18 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:55.516 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.516 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:55.516 21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.516 21:24:18 -- target/rpc.sh@33 -- # stats='{ 00:08:55.516 "tick_rate": 2500000000, 00:08:55.516 "poll_groups": [ 00:08:55.516 { 00:08:55.516 "name": "nvmf_tgt_poll_group_0", 00:08:55.516 "admin_qpairs": 0, 00:08:55.516 "io_qpairs": 0, 00:08:55.516 "current_admin_qpairs": 0, 00:08:55.516 "current_io_qpairs": 0, 00:08:55.516 "pending_bdev_io": 0, 00:08:55.516 "completed_nvme_io": 0, 00:08:55.516 "transports": [ 00:08:55.516 { 00:08:55.516 "trtype": "TCP" 00:08:55.516 } 00:08:55.516 ] 00:08:55.516 }, 00:08:55.516 { 00:08:55.516 "name": "nvmf_tgt_poll_group_1", 00:08:55.516 "admin_qpairs": 0, 00:08:55.516 "io_qpairs": 0, 00:08:55.516 "current_admin_qpairs": 0, 00:08:55.516 "current_io_qpairs": 0, 00:08:55.516 "pending_bdev_io": 0, 00:08:55.516 "completed_nvme_io": 0, 00:08:55.516 "transports": [ 00:08:55.516 { 00:08:55.516 "trtype": "TCP" 00:08:55.516 } 00:08:55.516 ] 00:08:55.516 }, 00:08:55.516 { 00:08:55.516 "name": "nvmf_tgt_poll_group_2", 00:08:55.516 "admin_qpairs": 0, 00:08:55.516 "io_qpairs": 0, 00:08:55.516 "current_admin_qpairs": 0, 00:08:55.516 "current_io_qpairs": 0, 00:08:55.516 "pending_bdev_io": 0, 00:08:55.516 "completed_nvme_io": 0, 00:08:55.516 "transports": [ 00:08:55.516 { 00:08:55.516 "trtype": "TCP" 00:08:55.516 } 00:08:55.516 ] 00:08:55.516 }, 00:08:55.516 { 00:08:55.516 "name": "nvmf_tgt_poll_group_3", 00:08:55.516 "admin_qpairs": 0, 00:08:55.516 "io_qpairs": 0, 00:08:55.516 "current_admin_qpairs": 0, 00:08:55.516 "current_io_qpairs": 0, 00:08:55.516 "pending_bdev_io": 0, 00:08:55.516 "completed_nvme_io": 0, 00:08:55.516 "transports": [ 00:08:55.516 { 00:08:55.516 "trtype": "TCP" 00:08:55.516 } 00:08:55.516 ] 00:08:55.516 } 00:08:55.516 ] 00:08:55.516 }' 00:08:55.516 21:24:18 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:55.516 21:24:18 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:55.516 21:24:18 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:55.516 21:24:18 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:55.516 21:24:18 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:55.516 21:24:18 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:55.516 21:24:18 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:55.516 21:24:18 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:55.516 21:24:18 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:55.516 21:24:18 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:55.516 21:24:18 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:55.516 21:24:18 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:55.516 21:24:18 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:55.516 21:24:18 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:55.516 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.516 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:55.516 Malloc1 00:08:55.516 21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.516 21:24:18 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:55.516 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.516 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:55.516 
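The '[' 4 '!=' 4 ']' and (( 0 == 0 )) checks above come from two small rpc.sh helpers whose bodies can be read almost straight off the trace: jcount counts the values a jq filter yields, jsum totals them. A reconstructed sketch (it assumes, as the trace suggests, that the helpers read the captured $stats JSON):

  jcount() {                       # how many values does the filter produce?
      local filter=$1
      jq "$filter" <<< "$stats" | wc -l
  }
  jsum() {                         # sum a numeric field across poll groups
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  stats=$(rpc_cmd nvmf_get_stats)                       # one poll group per core in -m 0xF
  (( $(jcount '.poll_groups[].name') == 4 ))            # four poll groups
  (( $(jsum '.poll_groups[].admin_qpairs') == 0 ))      # no qpairs before any connect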
21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.516 21:24:18 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:55.516 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.516 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:55.516 21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.516 21:24:18 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:55.516 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.516 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:55.774 21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.774 21:24:18 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.774 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.774 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:55.774 [2024-04-24 21:24:18.411759] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.774 21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.774 21:24:18 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:55.774 21:24:18 -- common/autotest_common.sh@638 -- # local es=0 00:08:55.774 21:24:18 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:55.774 21:24:18 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:55.774 21:24:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:55.774 21:24:18 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:55.774 21:24:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:55.774 21:24:18 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:55.774 21:24:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:55.774 21:24:18 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:55.774 21:24:18 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:55.775 21:24:18 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:55.775 [2024-04-24 21:24:18.440652] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:08:55.775 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:55.775 could not add new controller: failed to write to nvme-fabrics device 00:08:55.775 21:24:18 -- common/autotest_common.sh@641 -- # es=1 00:08:55.775 21:24:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:55.775 21:24:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:55.775 21:24:18 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:08:55.775 21:24:18 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:55.775 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.775 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:08:55.775 21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.775 21:24:18 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:57.149 21:24:19 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:57.149 21:24:19 -- common/autotest_common.sh@1184 -- # local i=0 00:08:57.149 21:24:19 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:57.149 21:24:19 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:57.149 21:24:19 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:59.048 21:24:21 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:59.048 21:24:21 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:59.048 21:24:21 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:59.048 21:24:21 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:59.048 21:24:21 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:59.048 21:24:21 -- common/autotest_common.sh@1194 -- # return 0 00:08:59.048 21:24:21 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:59.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.048 21:24:21 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:59.048 21:24:21 -- common/autotest_common.sh@1205 -- # local i=0 00:08:59.048 21:24:21 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:59.048 21:24:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.048 21:24:21 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:59.048 21:24:21 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.306 21:24:21 -- common/autotest_common.sh@1217 -- # return 0 00:08:59.306 21:24:21 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:59.306 21:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:59.306 21:24:21 -- common/autotest_common.sh@10 -- # set +x 00:08:59.306 21:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:59.306 21:24:21 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:59.307 21:24:21 -- common/autotest_common.sh@638 -- # local es=0 00:08:59.307 21:24:21 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:59.307 21:24:21 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:59.307 21:24:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:59.307 21:24:21 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:59.307 21:24:21 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:59.307 21:24:21 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:59.307 21:24:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:59.307 21:24:21 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:59.307 21:24:21 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:59.307 21:24:21 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:59.307 [2024-04-24 21:24:21.986740] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:08:59.307 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:59.307 could not add new controller: failed to write to nvme-fabrics device 00:08:59.307 21:24:22 -- common/autotest_common.sh@641 -- # es=1 00:08:59.307 21:24:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:59.307 21:24:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:59.307 21:24:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:59.307 21:24:22 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:59.307 21:24:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:59.307 21:24:22 -- common/autotest_common.sh@10 -- # set +x 00:08:59.307 21:24:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:59.307 21:24:22 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.681 21:24:23 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:00.681 21:24:23 -- common/autotest_common.sh@1184 -- # local i=0 00:09:00.681 21:24:23 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:00.681 21:24:23 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:00.681 21:24:23 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:02.583 21:24:25 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:02.583 21:24:25 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:02.583 21:24:25 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:02.583 21:24:25 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:02.583 21:24:25 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:02.583 21:24:25 -- common/autotest_common.sh@1194 -- # return 0 00:09:02.583 21:24:25 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:02.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.841 21:24:25 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:02.841 21:24:25 -- common/autotest_common.sh@1205 -- # local i=0 00:09:02.841 21:24:25 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:02.841 21:24:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.841 21:24:25 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:02.841 21:24:25 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.841 21:24:25 -- common/autotest_common.sh@1217 -- # return 0 00:09:02.841 21:24:25 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.841 21:24:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.841 21:24:25 -- common/autotest_common.sh@10 -- # set +x 00:09:02.841 21:24:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.841 21:24:25 -- target/rpc.sh@81 -- # seq 1 5 00:09:02.841 21:24:25 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:02.841 21:24:25 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:02.841 21:24:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.841 21:24:25 -- common/autotest_common.sh@10 -- # set +x 00:09:02.841 21:24:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.841 21:24:25 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.841 21:24:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.841 21:24:25 -- common/autotest_common.sh@10 -- # set +x 00:09:02.841 [2024-04-24 21:24:25.562853] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.841 21:24:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.841 21:24:25 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:02.841 21:24:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.841 21:24:25 -- common/autotest_common.sh@10 -- # set +x 00:09:02.841 21:24:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.841 21:24:25 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:02.841 21:24:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.841 21:24:25 -- common/autotest_common.sh@10 -- # set +x 00:09:02.841 21:24:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.841 21:24:25 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:04.246 21:24:26 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:04.246 21:24:26 -- common/autotest_common.sh@1184 -- # local i=0 00:09:04.246 21:24:26 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:04.246 21:24:26 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:04.246 21:24:26 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:06.143 21:24:28 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:06.144 21:24:28 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:06.144 21:24:28 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:06.144 21:24:28 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:06.144 21:24:28 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:06.144 21:24:28 -- common/autotest_common.sh@1194 -- # return 0 00:09:06.144 21:24:28 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:06.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.402 21:24:29 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:06.402 21:24:29 -- common/autotest_common.sh@1205 -- # local i=0 00:09:06.402 21:24:29 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:06.402 21:24:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
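After the access-control checks (the two NOT-wrapped connects that correctly failed with "does not allow host" until the host NQN was registered or allow_any_host was enabled), rpc.sh enters its main loop: five full subsystem lifecycles. The traced iterations condense to the following, with every command taken from the log:

  loops=5
  for i in $(seq 1 $loops); do
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # nsid 5
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      waitforserial SPDKISFASTANDAWESOME               # block until the disk shows up
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      waitforserial_disconnect SPDKISFASTANDAWESOME    # block until it is gone again
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done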
00:09:06.402 21:24:29 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:06.402 21:24:29 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:06.402 21:24:29 -- common/autotest_common.sh@1217 -- # return 0 00:09:06.402 21:24:29 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:06.402 21:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.402 21:24:29 -- common/autotest_common.sh@10 -- # set +x 00:09:06.402 21:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:06.402 21:24:29 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:06.402 21:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.402 21:24:29 -- common/autotest_common.sh@10 -- # set +x 00:09:06.402 21:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:06.402 21:24:29 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:06.402 21:24:29 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:06.402 21:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.402 21:24:29 -- common/autotest_common.sh@10 -- # set +x 00:09:06.402 21:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:06.402 21:24:29 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.402 21:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.402 21:24:29 -- common/autotest_common.sh@10 -- # set +x 00:09:06.402 [2024-04-24 21:24:29.118385] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.402 21:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:06.402 21:24:29 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:06.402 21:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.402 21:24:29 -- common/autotest_common.sh@10 -- # set +x 00:09:06.402 21:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:06.402 21:24:29 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:06.402 21:24:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.402 21:24:29 -- common/autotest_common.sh@10 -- # set +x 00:09:06.402 21:24:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:06.402 21:24:29 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.776 21:24:30 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:07.776 21:24:30 -- common/autotest_common.sh@1184 -- # local i=0 00:09:07.776 21:24:30 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.776 21:24:30 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:07.776 21:24:30 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:09.676 21:24:32 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:09.676 21:24:32 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:09.676 21:24:32 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:09.676 21:24:32 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:09.676 21:24:32 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:09.676 21:24:32 -- 
common/autotest_common.sh@1194 -- # return 0 00:09:09.676 21:24:32 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.934 21:24:32 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:09.934 21:24:32 -- common/autotest_common.sh@1205 -- # local i=0 00:09:09.934 21:24:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:09.934 21:24:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.934 21:24:32 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:09.934 21:24:32 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.934 21:24:32 -- common/autotest_common.sh@1217 -- # return 0 00:09:09.934 21:24:32 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:09.934 21:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.934 21:24:32 -- common/autotest_common.sh@10 -- # set +x 00:09:09.934 21:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.934 21:24:32 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.934 21:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.934 21:24:32 -- common/autotest_common.sh@10 -- # set +x 00:09:09.934 21:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.934 21:24:32 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:09.934 21:24:32 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:09.934 21:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.934 21:24:32 -- common/autotest_common.sh@10 -- # set +x 00:09:09.934 21:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.934 21:24:32 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.934 21:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.934 21:24:32 -- common/autotest_common.sh@10 -- # set +x 00:09:09.934 [2024-04-24 21:24:32.675531] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.934 21:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.934 21:24:32 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:09.934 21:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.934 21:24:32 -- common/autotest_common.sh@10 -- # set +x 00:09:09.934 21:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.934 21:24:32 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:09.934 21:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.934 21:24:32 -- common/autotest_common.sh@10 -- # set +x 00:09:09.934 21:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.934 21:24:32 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:11.304 21:24:34 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.304 21:24:34 -- common/autotest_common.sh@1184 -- # local i=0 00:09:11.304 21:24:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.304 21:24:34 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:09:11.304 21:24:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:13.200 21:24:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:13.200 21:24:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:13.200 21:24:36 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.459 21:24:36 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:13.459 21:24:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.459 21:24:36 -- common/autotest_common.sh@1194 -- # return 0 00:09:13.459 21:24:36 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:13.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.459 21:24:36 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:13.459 21:24:36 -- common/autotest_common.sh@1205 -- # local i=0 00:09:13.459 21:24:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:13.459 21:24:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.459 21:24:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:13.459 21:24:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.459 21:24:36 -- common/autotest_common.sh@1217 -- # return 0 00:09:13.459 21:24:36 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.459 21:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.459 21:24:36 -- common/autotest_common.sh@10 -- # set +x 00:09:13.459 21:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.459 21:24:36 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.459 21:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.459 21:24:36 -- common/autotest_common.sh@10 -- # set +x 00:09:13.459 21:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.459 21:24:36 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:13.459 21:24:36 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:13.459 21:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.459 21:24:36 -- common/autotest_common.sh@10 -- # set +x 00:09:13.459 21:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.459 21:24:36 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.459 21:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.459 21:24:36 -- common/autotest_common.sh@10 -- # set +x 00:09:13.459 [2024-04-24 21:24:36.238304] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.459 21:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.459 21:24:36 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:13.459 21:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.459 21:24:36 -- common/autotest_common.sh@10 -- # set +x 00:09:13.459 21:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.459 21:24:36 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:13.459 21:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.459 21:24:36 -- common/autotest_common.sh@10 -- # set +x 00:09:13.459 21:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.459 
21:24:36 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.832 21:24:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:14.832 21:24:37 -- common/autotest_common.sh@1184 -- # local i=0 00:09:14.832 21:24:37 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.832 21:24:37 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:14.832 21:24:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:16.739 21:24:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:16.739 21:24:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:16.739 21:24:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:16.739 21:24:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:16.739 21:24:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:16.739 21:24:39 -- common/autotest_common.sh@1194 -- # return 0 00:09:16.739 21:24:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:16.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.997 21:24:39 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:16.997 21:24:39 -- common/autotest_common.sh@1205 -- # local i=0 00:09:16.997 21:24:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:16.997 21:24:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.997 21:24:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:16.997 21:24:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.997 21:24:39 -- common/autotest_common.sh@1217 -- # return 0 00:09:16.997 21:24:39 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.997 21:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.997 21:24:39 -- common/autotest_common.sh@10 -- # set +x 00:09:16.997 21:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.997 21:24:39 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.997 21:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.997 21:24:39 -- common/autotest_common.sh@10 -- # set +x 00:09:16.997 21:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.997 21:24:39 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:16.997 21:24:39 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:16.997 21:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.997 21:24:39 -- common/autotest_common.sh@10 -- # set +x 00:09:16.997 21:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.997 21:24:39 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.997 21:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.997 21:24:39 -- common/autotest_common.sh@10 -- # set +x 00:09:16.997 [2024-04-24 21:24:39.748125] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.997 21:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.997 21:24:39 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:16.997 
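Each connect in this loop, including the one that follows, passes the host identity that common.sh derived earlier from nvme gen-hostnqn. A sketch of that identity plumbing (the UUID is the one in the log; deriving NVME_HOSTID by stripping the NQN prefix is an assumption for illustration, though the two values do match in the trace):

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # sketch: the uuid suffix doubles as the host ID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420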
21:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.997 21:24:39 -- common/autotest_common.sh@10 -- # set +x 00:09:16.997 21:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.997 21:24:39 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:16.997 21:24:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.997 21:24:39 -- common/autotest_common.sh@10 -- # set +x 00:09:16.997 21:24:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.997 21:24:39 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:18.378 21:24:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.378 21:24:41 -- common/autotest_common.sh@1184 -- # local i=0 00:09:18.378 21:24:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.378 21:24:41 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:18.378 21:24:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:20.278 21:24:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:20.278 21:24:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:20.278 21:24:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.278 21:24:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:20.278 21:24:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.278 21:24:43 -- common/autotest_common.sh@1194 -- # return 0 00:09:20.536 21:24:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.536 21:24:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.536 21:24:43 -- common/autotest_common.sh@1205 -- # local i=0 00:09:20.536 21:24:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:20.536 21:24:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.536 21:24:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:20.536 21:24:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.536 21:24:43 -- common/autotest_common.sh@1217 -- # return 0 00:09:20.536 21:24:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.536 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.536 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.536 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.536 21:24:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.536 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.536 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.536 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.536 21:24:43 -- target/rpc.sh@99 -- # seq 1 5 00:09:20.536 21:24:43 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.536 21:24:43 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.536 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.536 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.536 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.536 21:24:43 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.536 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.537 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 [2024-04-24 21:24:43.314431] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.537 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.537 21:24:43 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.537 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.537 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.537 21:24:43 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.537 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.537 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.537 21:24:43 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.537 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.537 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.537 21:24:43 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.537 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.537 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.537 21:24:43 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.537 21:24:43 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.537 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.537 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.537 21:24:43 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.537 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.537 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 [2024-04-24 21:24:43.362554] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.537 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.537 21:24:43 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.537 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.537 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.537 21:24:43 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.537 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.537 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.537 21:24:43 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.537 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.537 21:24:43 -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.537 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.537 21:24:43 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.537 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.537 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.537 21:24:43 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.537 21:24:43 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.537 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.537 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.537 21:24:43 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.537 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.537 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.537 [2024-04-24 21:24:43.410676] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.537 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.537 21:24:43 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.537 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.537 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.795 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.795 21:24:43 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.795 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.795 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.795 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.795 21:24:43 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.795 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.795 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.795 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.795 21:24:43 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.795 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.796 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.796 21:24:43 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.796 21:24:43 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.796 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.796 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.796 21:24:43 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.796 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.796 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 [2024-04-24 21:24:43.462856] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.796 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.796 
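Each repeated RPC burst above is one loop iteration: create a subsystem, wire it up, then tear it down again. Stripped of the rpc_cmd/xtrace plumbing, the body reduces to the following (invoking rpc.py directly is a stand-in for the rpc_cmd wrapper used in the trace; paths and arguments come from the log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 "$loops"); do
      $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
      $rpc nvmf_subsystem_allow_any_host "$nqn"
      $rpc nvmf_subsystem_remove_ns "$nqn" 1
      $rpc nvmf_delete_subsystem "$nqn"
  done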
21:24:43 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.796 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.796 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.796 21:24:43 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.796 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.796 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.796 21:24:43 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.796 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.796 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.796 21:24:43 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.796 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.796 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.796 21:24:43 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.796 21:24:43 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.796 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.796 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.796 21:24:43 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.796 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.796 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 [2024-04-24 21:24:43.511025] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.796 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.796 21:24:43 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.796 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.796 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.796 21:24:43 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.796 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.796 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.796 21:24:43 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.796 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.796 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.796 21:24:43 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.796 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.796 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.796 21:24:43 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:09:20.796 21:24:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:20.796 21:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:20.796 21:24:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:20.796 21:24:43 -- target/rpc.sh@110 -- # stats='{ 00:09:20.796 "tick_rate": 2500000000, 00:09:20.796 "poll_groups": [ 00:09:20.796 { 00:09:20.796 "name": "nvmf_tgt_poll_group_0", 00:09:20.796 "admin_qpairs": 2, 00:09:20.796 "io_qpairs": 196, 00:09:20.796 "current_admin_qpairs": 0, 00:09:20.796 "current_io_qpairs": 0, 00:09:20.796 "pending_bdev_io": 0, 00:09:20.796 "completed_nvme_io": 201, 00:09:20.796 "transports": [ 00:09:20.796 { 00:09:20.796 "trtype": "TCP" 00:09:20.796 } 00:09:20.796 ] 00:09:20.796 }, 00:09:20.796 { 00:09:20.796 "name": "nvmf_tgt_poll_group_1", 00:09:20.796 "admin_qpairs": 2, 00:09:20.796 "io_qpairs": 196, 00:09:20.796 "current_admin_qpairs": 0, 00:09:20.796 "current_io_qpairs": 0, 00:09:20.796 "pending_bdev_io": 0, 00:09:20.796 "completed_nvme_io": 292, 00:09:20.796 "transports": [ 00:09:20.796 { 00:09:20.796 "trtype": "TCP" 00:09:20.796 } 00:09:20.796 ] 00:09:20.796 }, 00:09:20.796 { 00:09:20.796 "name": "nvmf_tgt_poll_group_2", 00:09:20.796 "admin_qpairs": 1, 00:09:20.796 "io_qpairs": 196, 00:09:20.796 "current_admin_qpairs": 0, 00:09:20.796 "current_io_qpairs": 0, 00:09:20.796 "pending_bdev_io": 0, 00:09:20.796 "completed_nvme_io": 345, 00:09:20.796 "transports": [ 00:09:20.796 { 00:09:20.796 "trtype": "TCP" 00:09:20.796 } 00:09:20.796 ] 00:09:20.796 }, 00:09:20.796 { 00:09:20.796 "name": "nvmf_tgt_poll_group_3", 00:09:20.796 "admin_qpairs": 2, 00:09:20.796 "io_qpairs": 196, 00:09:20.796 "current_admin_qpairs": 0, 00:09:20.796 "current_io_qpairs": 0, 00:09:20.796 "pending_bdev_io": 0, 00:09:20.796 "completed_nvme_io": 296, 00:09:20.796 "transports": [ 00:09:20.796 { 00:09:20.796 "trtype": "TCP" 00:09:20.796 } 00:09:20.796 ] 00:09:20.796 } 00:09:20.796 ] 00:09:20.796 }' 00:09:20.796 21:24:43 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:20.796 21:24:43 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:20.796 21:24:43 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:20.796 21:24:43 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:20.796 21:24:43 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:20.796 21:24:43 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:20.796 21:24:43 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:20.796 21:24:43 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:20.796 21:24:43 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:20.796 21:24:43 -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:09:20.796 21:24:43 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:20.796 21:24:43 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:20.796 21:24:43 -- target/rpc.sh@123 -- # nvmftestfini 00:09:20.796 21:24:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:20.796 21:24:43 -- nvmf/common.sh@117 -- # sync 00:09:20.796 21:24:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:20.796 21:24:43 -- nvmf/common.sh@120 -- # set +e 00:09:20.796 21:24:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:20.796 21:24:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:20.796 rmmod nvme_tcp 00:09:21.055 rmmod nvme_fabrics 00:09:21.055 rmmod nvme_keyring 00:09:21.055 21:24:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:21.055 21:24:43 -- nvmf/common.sh@124 -- # set -e 00:09:21.055 21:24:43 -- 
nvmf/common.sh@125 -- # return 0 00:09:21.055 21:24:43 -- nvmf/common.sh@478 -- # '[' -n 2736607 ']' 00:09:21.055 21:24:43 -- nvmf/common.sh@479 -- # killprocess 2736607 00:09:21.055 21:24:43 -- common/autotest_common.sh@936 -- # '[' -z 2736607 ']' 00:09:21.055 21:24:43 -- common/autotest_common.sh@940 -- # kill -0 2736607 00:09:21.055 21:24:43 -- common/autotest_common.sh@941 -- # uname 00:09:21.055 21:24:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:21.055 21:24:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2736607 00:09:21.055 21:24:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:21.055 21:24:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:21.055 21:24:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2736607' 00:09:21.055 killing process with pid 2736607 00:09:21.055 21:24:43 -- common/autotest_common.sh@955 -- # kill 2736607 00:09:21.055 21:24:43 -- common/autotest_common.sh@960 -- # wait 2736607 00:09:21.314 21:24:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:21.314 21:24:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:21.314 21:24:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:21.314 21:24:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:21.314 21:24:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:21.314 21:24:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.314 21:24:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:21.314 21:24:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.845 21:24:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:23.845 00:09:23.845 real 0m35.639s 00:09:23.845 user 1m46.884s 00:09:23.845 sys 0m8.130s 00:09:23.845 21:24:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:23.845 21:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:23.845 ************************************ 00:09:23.845 END TEST nvmf_rpc 00:09:23.845 ************************************ 00:09:23.845 21:24:46 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:23.845 21:24:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:23.845 21:24:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.845 21:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:23.845 ************************************ 00:09:23.845 START TEST nvmf_invalid 00:09:23.845 ************************************ 00:09:23.845 21:24:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:23.845 * Looking for test storage... 
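Back in the nvmf_rpc stats check above, jsum is the small helper that totals one counter across all poll groups; its shape can be inferred from the target/rpc.sh@19-20 trace lines (a sketch, with $stats holding the nvmf_get_stats JSON captured earlier):

  stats=$(rpc_cmd nvmf_get_stats)
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 in this run
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 x 196 = 784 in this run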
00:09:23.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.845 21:24:46 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.845 21:24:46 -- nvmf/common.sh@7 -- # uname -s 00:09:23.845 21:24:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.845 21:24:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.845 21:24:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.845 21:24:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.845 21:24:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.845 21:24:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.845 21:24:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.845 21:24:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.845 21:24:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.845 21:24:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.845 21:24:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:23.845 21:24:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:23.845 21:24:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.845 21:24:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.845 21:24:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.845 21:24:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.845 21:24:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.845 21:24:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.845 21:24:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.845 21:24:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.845 21:24:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.845 21:24:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.845 21:24:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.845 21:24:46 -- paths/export.sh@5 -- # export PATH 00:09:23.845 21:24:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.845 21:24:46 -- nvmf/common.sh@47 -- # : 0 00:09:23.845 21:24:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:23.845 21:24:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:23.845 21:24:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.845 21:24:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.845 21:24:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.845 21:24:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:23.845 21:24:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:23.845 21:24:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:23.845 21:24:46 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:23.845 21:24:46 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.845 21:24:46 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:23.845 21:24:46 -- target/invalid.sh@14 -- # target=foobar 00:09:23.845 21:24:46 -- target/invalid.sh@16 -- # RANDOM=0 00:09:23.845 21:24:46 -- target/invalid.sh@34 -- # nvmftestinit 00:09:23.845 21:24:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:23.845 21:24:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.845 21:24:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:23.845 21:24:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:23.845 21:24:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:23.845 21:24:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.845 21:24:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:23.845 21:24:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.845 21:24:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:23.845 21:24:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:23.845 21:24:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:23.845 21:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:30.422 21:24:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:30.422 21:24:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.422 21:24:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.422 21:24:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.422 21:24:52 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.422 21:24:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.422 21:24:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.422 21:24:52 -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.422 21:24:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.422 21:24:52 -- nvmf/common.sh@296 -- # e810=() 00:09:30.422 21:24:52 -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.423 21:24:52 -- nvmf/common.sh@297 -- # x722=() 00:09:30.423 21:24:52 -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.423 21:24:52 -- nvmf/common.sh@298 -- # mlx=() 00:09:30.423 21:24:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.423 21:24:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.423 21:24:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.423 21:24:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.423 21:24:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.423 21:24:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.423 21:24:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.423 21:24:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.423 21:24:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.423 21:24:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.423 21:24:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.423 21:24:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.423 21:24:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.423 21:24:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.423 21:24:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.423 21:24:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.423 21:24:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:30.423 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:30.423 21:24:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.423 21:24:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:30.423 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:30.423 21:24:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.423 21:24:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.423 
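The device scan above boils down to: look up the PCI addresses for the requested NIC family (SPDK_TEST_NVMF_NICS=e810), then map each address to its kernel interface through sysfs. A condensed sketch (the pci_bus_cache associative array, keyed "vendor:device", is an assumption about the cache the trace indexes into):

  intel=0x8086
  e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
  pci_devs=("${e810[@]}")
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # basename only: the interface name
      net_devs+=("${pci_net_devs[@]}")          # here: cvl_0_0 and cvl_0_1
  done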
21:24:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.423 21:24:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:30.423 21:24:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.423 21:24:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:30.423 Found net devices under 0000:af:00.0: cvl_0_0 00:09:30.423 21:24:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.423 21:24:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.423 21:24:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.423 21:24:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:30.423 21:24:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.423 21:24:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:30.423 Found net devices under 0000:af:00.1: cvl_0_1 00:09:30.423 21:24:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.423 21:24:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:30.423 21:24:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:30.423 21:24:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:30.423 21:24:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:30.423 21:24:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.423 21:24:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.423 21:24:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.423 21:24:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:30.423 21:24:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.423 21:24:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.423 21:24:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:30.423 21:24:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.423 21:24:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.423 21:24:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:30.423 21:24:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:30.423 21:24:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.423 21:24:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.423 21:24:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.423 21:24:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.423 21:24:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:30.423 21:24:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.423 21:24:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.423 21:24:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.423 21:24:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:30.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:09:30.423 00:09:30.423 --- 10.0.0.2 ping statistics --- 00:09:30.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.423 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:09:30.423 21:24:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:30.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:09:30.423 00:09:30.423 --- 10.0.0.1 ping statistics --- 00:09:30.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.423 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:09:30.423 21:24:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.423 21:24:53 -- nvmf/common.sh@411 -- # return 0 00:09:30.423 21:24:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:30.423 21:24:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.423 21:24:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:30.423 21:24:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:30.423 21:24:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.423 21:24:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:30.423 21:24:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:30.423 21:24:53 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:30.423 21:24:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:30.423 21:24:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:30.423 21:24:53 -- common/autotest_common.sh@10 -- # set +x 00:09:30.423 21:24:53 -- nvmf/common.sh@470 -- # nvmfpid=2745025 00:09:30.423 21:24:53 -- nvmf/common.sh@471 -- # waitforlisten 2745025 00:09:30.423 21:24:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.423 21:24:53 -- common/autotest_common.sh@817 -- # '[' -z 2745025 ']' 00:09:30.423 21:24:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.423 21:24:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:30.423 21:24:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.423 21:24:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:30.423 21:24:53 -- common/autotest_common.sh@10 -- # set +x 00:09:30.423 [2024-04-24 21:24:53.173506] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:09:30.423 [2024-04-24 21:24:53.173553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.423 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.423 [2024-04-24 21:24:53.247793] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.683 [2024-04-24 21:24:53.327188] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.683 [2024-04-24 21:24:53.327221] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.683 [2024-04-24 21:24:53.327230] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.683 [2024-04-24 21:24:53.327239] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.683 [2024-04-24 21:24:53.327247] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
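The nvmf_tcp_init sequence traced a few entries up reduces to moving the target-side port into its own network namespace, addressing both ends, opening the NVMe/TCP port, and pinging both directions (commands lifted from the trace; interface and namespace names come from the log):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target side into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target ns -> root ns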
00:09:30.683 [2024-04-24 21:24:53.327287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.683 [2024-04-24 21:24:53.327304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.683 [2024-04-24 21:24:53.327410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.683 [2024-04-24 21:24:53.327411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.282 21:24:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:31.282 21:24:53 -- common/autotest_common.sh@850 -- # return 0 00:09:31.282 21:24:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:31.282 21:24:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:31.282 21:24:53 -- common/autotest_common.sh@10 -- # set +x 00:09:31.282 21:24:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.282 21:24:54 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:31.282 21:24:54 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2688 00:09:31.542 [2024-04-24 21:24:54.185694] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:31.542 21:24:54 -- target/invalid.sh@40 -- # out='request: 00:09:31.542 { 00:09:31.542 "nqn": "nqn.2016-06.io.spdk:cnode2688", 00:09:31.542 "tgt_name": "foobar", 00:09:31.542 "method": "nvmf_create_subsystem", 00:09:31.542 "req_id": 1 00:09:31.542 } 00:09:31.542 Got JSON-RPC error response 00:09:31.542 response: 00:09:31.542 { 00:09:31.542 "code": -32603, 00:09:31.542 "message": "Unable to find target foobar" 00:09:31.542 }' 00:09:31.542 21:24:54 -- target/invalid.sh@41 -- # [[ request: 00:09:31.542 { 00:09:31.542 "nqn": "nqn.2016-06.io.spdk:cnode2688", 00:09:31.542 "tgt_name": "foobar", 00:09:31.542 "method": "nvmf_create_subsystem", 00:09:31.542 "req_id": 1 00:09:31.542 } 00:09:31.542 Got JSON-RPC error response 00:09:31.542 response: 00:09:31.542 { 00:09:31.542 "code": -32603, 00:09:31.542 "message": "Unable to find target foobar" 00:09:31.542 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:31.542 21:24:54 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:31.542 21:24:54 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9170 00:09:31.542 [2024-04-24 21:24:54.370392] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9170: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:31.542 21:24:54 -- target/invalid.sh@45 -- # out='request: 00:09:31.542 { 00:09:31.542 "nqn": "nqn.2016-06.io.spdk:cnode9170", 00:09:31.542 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:31.542 "method": "nvmf_create_subsystem", 00:09:31.542 "req_id": 1 00:09:31.542 } 00:09:31.542 Got JSON-RPC error response 00:09:31.542 response: 00:09:31.542 { 00:09:31.542 "code": -32602, 00:09:31.542 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:31.542 }' 00:09:31.542 21:24:54 -- target/invalid.sh@46 -- # [[ request: 00:09:31.542 { 00:09:31.542 "nqn": "nqn.2016-06.io.spdk:cnode9170", 00:09:31.542 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:31.542 "method": "nvmf_create_subsystem", 00:09:31.542 "req_id": 1 00:09:31.542 } 00:09:31.542 Got JSON-RPC error response 00:09:31.542 response: 00:09:31.542 { 00:09:31.542 
"code": -32602, 00:09:31.542 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:31.542 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:31.542 21:24:54 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:31.542 21:24:54 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14399 00:09:31.801 [2024-04-24 21:24:54.542905] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14399: invalid model number 'SPDK_Controller' 00:09:31.801 21:24:54 -- target/invalid.sh@50 -- # out='request: 00:09:31.801 { 00:09:31.802 "nqn": "nqn.2016-06.io.spdk:cnode14399", 00:09:31.802 "model_number": "SPDK_Controller\u001f", 00:09:31.802 "method": "nvmf_create_subsystem", 00:09:31.802 "req_id": 1 00:09:31.802 } 00:09:31.802 Got JSON-RPC error response 00:09:31.802 response: 00:09:31.802 { 00:09:31.802 "code": -32602, 00:09:31.802 "message": "Invalid MN SPDK_Controller\u001f" 00:09:31.802 }' 00:09:31.802 21:24:54 -- target/invalid.sh@51 -- # [[ request: 00:09:31.802 { 00:09:31.802 "nqn": "nqn.2016-06.io.spdk:cnode14399", 00:09:31.802 "model_number": "SPDK_Controller\u001f", 00:09:31.802 "method": "nvmf_create_subsystem", 00:09:31.802 "req_id": 1 00:09:31.802 } 00:09:31.802 Got JSON-RPC error response 00:09:31.802 response: 00:09:31.802 { 00:09:31.802 "code": -32602, 00:09:31.802 "message": "Invalid MN SPDK_Controller\u001f" 00:09:31.802 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:31.802 21:24:54 -- target/invalid.sh@54 -- # gen_random_s 21 00:09:31.802 21:24:54 -- target/invalid.sh@19 -- # local length=21 ll 00:09:31.802 21:24:54 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:31.802 21:24:54 -- target/invalid.sh@21 -- # local chars 00:09:31.802 21:24:54 -- target/invalid.sh@22 -- # local string 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 108 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # string+=l 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 61 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # string+== 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 100 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # string+=d 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 120 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # 
echo -e '\x78' 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # string+=x 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 47 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # string+=/ 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 47 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # string+=/ 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 91 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # string+='[' 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 88 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # string+=X 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 91 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # string+='[' 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 109 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # string+=m 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 55 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # string+=7 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 103 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # string+=g 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 68 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # string+=D 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 114 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # string+=r 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.802 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.802 21:24:54 -- target/invalid.sh@25 -- # printf %x 119 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # echo 
-e '\x77' 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # string+=w 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # printf %x 52 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # string+=4 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # printf %x 110 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # string+=n 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # printf %x 41 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # string+=')' 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # printf %x 96 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # string+='`' 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # printf %x 63 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # string+='?' 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # printf %x 53 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # string+=5 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.062 21:24:54 -- target/invalid.sh@28 -- # [[ l == \- ]] 00:09:32.062 21:24:54 -- target/invalid.sh@31 -- # echo 'l=dx//[X[m7gDrw4n)`?5' 00:09:32.062 21:24:54 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'l=dx//[X[m7gDrw4n)`?5' nqn.2016-06.io.spdk:cnode5774 00:09:32.062 [2024-04-24 21:24:54.896086] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5774: invalid serial number 'l=dx//[X[m7gDrw4n)`?5' 00:09:32.062 21:24:54 -- target/invalid.sh@54 -- # out='request: 00:09:32.062 { 00:09:32.062 "nqn": "nqn.2016-06.io.spdk:cnode5774", 00:09:32.062 "serial_number": "l=dx//[X[m7gDrw4n)`?5", 00:09:32.062 "method": "nvmf_create_subsystem", 00:09:32.062 "req_id": 1 00:09:32.062 } 00:09:32.062 Got JSON-RPC error response 00:09:32.062 response: 00:09:32.062 { 00:09:32.062 "code": -32602, 00:09:32.062 "message": "Invalid SN l=dx//[X[m7gDrw4n)`?5" 00:09:32.062 }' 00:09:32.062 21:24:54 -- target/invalid.sh@55 -- # [[ request: 00:09:32.062 { 00:09:32.062 "nqn": "nqn.2016-06.io.spdk:cnode5774", 00:09:32.062 "serial_number": "l=dx//[X[m7gDrw4n)`?5", 00:09:32.062 "method": "nvmf_create_subsystem", 00:09:32.062 "req_id": 1 00:09:32.062 } 00:09:32.062 Got JSON-RPC error response 00:09:32.062 response: 00:09:32.062 { 00:09:32.062 "code": -32602, 00:09:32.062 "message": "Invalid SN 
l=dx//[X[m7gDrw4n)`?5" 00:09:32.062 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:32.062 21:24:54 -- target/invalid.sh@58 -- # gen_random_s 41 00:09:32.062 21:24:54 -- target/invalid.sh@19 -- # local length=41 ll 00:09:32.062 21:24:54 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:32.062 21:24:54 -- target/invalid.sh@21 -- # local chars 00:09:32.062 21:24:54 -- target/invalid.sh@22 -- # local string 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # printf %x 121 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # string+=y 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # printf %x 95 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:32.062 21:24:54 -- target/invalid.sh@25 -- # string+=_ 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.062 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # printf %x 97 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # string+=a 00:09:32.322 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # printf %x 63 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # string+='?' 00:09:32.322 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # printf %x 106 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # string+=j 00:09:32.322 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # printf %x 63 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # string+='?' 
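Each negative case so far follows the same recipe: call nvmf_create_subsystem with exactly one malformed argument, capture the JSON-RPC error text, and glob-match the message. Condensed (rpc.py path as elsewhere in the log; error strings mirror the responses above):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2688 2>&1) || true
  [[ $out == *"Unable to find target"* ]]      # code -32603: unknown target name
  out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9170 2>&1) || true
  [[ $out == *"Invalid SN"* ]]                 # code -32602: control char in serial
  out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14399 2>&1) || true
  [[ $out == *"Invalid MN"* ]]                 # code -32602: control char in model number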
00:09:32.322 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # printf %x 41 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # string+=')' 00:09:32.322 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # printf %x 71 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # string+=G 00:09:32.322 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # printf %x 90 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:32.322 21:24:54 -- target/invalid.sh@25 -- # string+=Z 00:09:32.322 21:24:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # printf %x 40 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # string+='(' 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # printf %x 71 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # string+=G 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # printf %x 93 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # string+=']' 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # printf %x 105 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # string+=i 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # printf %x 43 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # string+=+ 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # printf %x 68 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # string+=D 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # printf %x 71 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # string+=G 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # printf %x 33 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # string+='!' 
00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # printf %x 76 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # string+=L 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # printf %x 40 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # string+='(' 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # printf %x 85 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # string+=U 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # printf %x 60 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # string+='<' 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # printf %x 35 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:32.322 21:24:55 -- target/invalid.sh@25 -- # string+='#' 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.322 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 50 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+=2 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 42 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+='*' 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 63 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+='?' 
00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 39 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+=\' 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 92 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+='\' 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 34 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+='"' 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 34 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+='"' 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 114 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+=r 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 46 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+=. 
00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 38 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+='&' 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 122 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+=z 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 59 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+=';' 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 111 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+=o 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 61 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+== 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.323 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # printf %x 52 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:32.323 21:24:55 -- target/invalid.sh@25 -- # string+=4 00:09:32.582 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.582 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.582 21:24:55 -- target/invalid.sh@25 -- # printf %x 89 00:09:32.582 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:32.582 21:24:55 -- target/invalid.sh@25 -- # string+=Y 00:09:32.582 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.582 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.582 21:24:55 -- target/invalid.sh@25 -- # printf %x 62 00:09:32.582 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:32.582 21:24:55 -- target/invalid.sh@25 -- # string+='>' 00:09:32.582 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.582 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.582 21:24:55 -- target/invalid.sh@25 -- # printf %x 53 00:09:32.582 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:32.582 21:24:55 -- target/invalid.sh@25 -- # string+=5 00:09:32.582 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.582 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.582 21:24:55 -- target/invalid.sh@25 -- # printf %x 92 00:09:32.582 21:24:55 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:32.582 21:24:55 -- target/invalid.sh@25 -- # string+='\' 00:09:32.582 21:24:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:32.582 21:24:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:32.582 21:24:55 -- target/invalid.sh@28 -- # [[ y == \- ]] 00:09:32.582 21:24:55 -- target/invalid.sh@31 -- # echo 'y_a?j?)GZ(G]i+DG!L(U<#2*?'\''\""r.&z;o=4Y>5\' 00:09:32.582 21:24:55 -- 
target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'y_a?j?)GZ(G]i+DG!L(U<#2*?'\''\""r.&z;o=4Y>5\' nqn.2016-06.io.spdk:cnode24496 00:09:32.583 [2024-04-24 21:24:55.397767] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24496: invalid model number 'y_a?j?)GZ(G]i+DG!L(U<#2*?'\""r.&z;o=4Y>5\' 00:09:32.583 21:24:55 -- target/invalid.sh@58 -- # out='request: 00:09:32.583 { 00:09:32.583 "nqn": "nqn.2016-06.io.spdk:cnode24496", 00:09:32.583 "model_number": "y_a?j?)GZ(G]i+DG!L(U<#2*?'\''\\\"\"r.&z;o=4Y>5\\", 00:09:32.583 "method": "nvmf_create_subsystem", 00:09:32.583 "req_id": 1 00:09:32.583 } 00:09:32.583 Got JSON-RPC error response 00:09:32.583 response: 00:09:32.583 { 00:09:32.583 "code": -32602, 00:09:32.583 "message": "Invalid MN y_a?j?)GZ(G]i+DG!L(U<#2*?'\''\\\"\"r.&z;o=4Y>5\\" 00:09:32.583 }' 00:09:32.583 21:24:55 -- target/invalid.sh@59 -- # [[ request: 00:09:32.583 { 00:09:32.583 "nqn": "nqn.2016-06.io.spdk:cnode24496", 00:09:32.583 "model_number": "y_a?j?)GZ(G]i+DG!L(U<#2*?'\\\"\"r.&z;o=4Y>5\\", 00:09:32.583 "method": "nvmf_create_subsystem", 00:09:32.583 "req_id": 1 00:09:32.583 } 00:09:32.583 Got JSON-RPC error response 00:09:32.583 response: 00:09:32.583 { 00:09:32.583 "code": -32602, 00:09:32.583 "message": "Invalid MN y_a?j?)GZ(G]i+DG!L(U<#2*?'\\\"\"r.&z;o=4Y>5\\" 00:09:32.583 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:32.583 21:24:55 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:32.841 [2024-04-24 21:24:55.574410] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.842 21:24:55 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:33.101 21:24:55 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:33.101 21:24:55 -- target/invalid.sh@67 -- # echo '' 00:09:33.101 21:24:55 -- target/invalid.sh@67 -- # head -n 1 00:09:33.101 21:24:55 -- target/invalid.sh@67 -- # IP= 00:09:33.101 21:24:55 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:33.101 [2024-04-24 21:24:55.943652] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:33.101 21:24:55 -- target/invalid.sh@69 -- # out='request: 00:09:33.101 { 00:09:33.101 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:33.101 "listen_address": { 00:09:33.101 "trtype": "tcp", 00:09:33.101 "traddr": "", 00:09:33.101 "trsvcid": "4421" 00:09:33.101 }, 00:09:33.101 "method": "nvmf_subsystem_remove_listener", 00:09:33.101 "req_id": 1 00:09:33.101 } 00:09:33.101 Got JSON-RPC error response 00:09:33.101 response: 00:09:33.101 { 00:09:33.101 "code": -32602, 00:09:33.101 "message": "Invalid parameters" 00:09:33.101 }' 00:09:33.101 21:24:55 -- target/invalid.sh@70 -- # [[ request: 00:09:33.101 { 00:09:33.101 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:33.101 "listen_address": { 00:09:33.101 "trtype": "tcp", 00:09:33.101 "traddr": "", 00:09:33.101 "trsvcid": "4421" 00:09:33.101 }, 00:09:33.101 "method": "nvmf_subsystem_remove_listener", 00:09:33.101 "req_id": 1 00:09:33.101 } 00:09:33.101 Got JSON-RPC error response 00:09:33.101 response: 00:09:33.101 { 00:09:33.101 "code": -32602, 00:09:33.101 "message": "Invalid parameters" 00:09:33.101 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ 
\l\i\s\t\e\n\e\r\.* ]] 00:09:33.101 21:24:55 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19822 -i 0 00:09:33.360 [2024-04-24 21:24:56.124221] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19822: invalid cntlid range [0-65519] 00:09:33.360 21:24:56 -- target/invalid.sh@73 -- # out='request: 00:09:33.360 { 00:09:33.360 "nqn": "nqn.2016-06.io.spdk:cnode19822", 00:09:33.360 "min_cntlid": 0, 00:09:33.360 "method": "nvmf_create_subsystem", 00:09:33.360 "req_id": 1 00:09:33.360 } 00:09:33.360 Got JSON-RPC error response 00:09:33.360 response: 00:09:33.360 { 00:09:33.360 "code": -32602, 00:09:33.360 "message": "Invalid cntlid range [0-65519]" 00:09:33.360 }' 00:09:33.360 21:24:56 -- target/invalid.sh@74 -- # [[ request: 00:09:33.360 { 00:09:33.360 "nqn": "nqn.2016-06.io.spdk:cnode19822", 00:09:33.360 "min_cntlid": 0, 00:09:33.360 "method": "nvmf_create_subsystem", 00:09:33.360 "req_id": 1 00:09:33.360 } 00:09:33.360 Got JSON-RPC error response 00:09:33.360 response: 00:09:33.360 { 00:09:33.360 "code": -32602, 00:09:33.360 "message": "Invalid cntlid range [0-65519]" 00:09:33.360 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:33.360 21:24:56 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22930 -i 65520 00:09:33.619 [2024-04-24 21:24:56.312886] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22930: invalid cntlid range [65520-65519] 00:09:33.619 21:24:56 -- target/invalid.sh@75 -- # out='request: 00:09:33.619 { 00:09:33.619 "nqn": "nqn.2016-06.io.spdk:cnode22930", 00:09:33.619 "min_cntlid": 65520, 00:09:33.619 "method": "nvmf_create_subsystem", 00:09:33.619 "req_id": 1 00:09:33.619 } 00:09:33.619 Got JSON-RPC error response 00:09:33.619 response: 00:09:33.619 { 00:09:33.619 "code": -32602, 00:09:33.619 "message": "Invalid cntlid range [65520-65519]" 00:09:33.619 }' 00:09:33.619 21:24:56 -- target/invalid.sh@76 -- # [[ request: 00:09:33.619 { 00:09:33.619 "nqn": "nqn.2016-06.io.spdk:cnode22930", 00:09:33.619 "min_cntlid": 65520, 00:09:33.619 "method": "nvmf_create_subsystem", 00:09:33.619 "req_id": 1 00:09:33.619 } 00:09:33.619 Got JSON-RPC error response 00:09:33.619 response: 00:09:33.619 { 00:09:33.619 "code": -32602, 00:09:33.619 "message": "Invalid cntlid range [65520-65519]" 00:09:33.619 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:33.619 21:24:56 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6189 -I 0 00:09:33.619 [2024-04-24 21:24:56.497540] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6189: invalid cntlid range [1-0] 00:09:33.878 21:24:56 -- target/invalid.sh@77 -- # out='request: 00:09:33.878 { 00:09:33.878 "nqn": "nqn.2016-06.io.spdk:cnode6189", 00:09:33.878 "max_cntlid": 0, 00:09:33.878 "method": "nvmf_create_subsystem", 00:09:33.878 "req_id": 1 00:09:33.878 } 00:09:33.878 Got JSON-RPC error response 00:09:33.878 response: 00:09:33.878 { 00:09:33.878 "code": -32602, 00:09:33.878 "message": "Invalid cntlid range [1-0]" 00:09:33.878 }' 00:09:33.878 21:24:56 -- target/invalid.sh@78 -- # [[ request: 00:09:33.878 { 00:09:33.878 "nqn": "nqn.2016-06.io.spdk:cnode6189", 00:09:33.878 "max_cntlid": 0, 00:09:33.878 "method": "nvmf_create_subsystem", 00:09:33.878 "req_id": 1 
00:09:33.878 } 00:09:33.878 Got JSON-RPC error response 00:09:33.878 response: 00:09:33.878 { 00:09:33.878 "code": -32602, 00:09:33.878 "message": "Invalid cntlid range [1-0]" 00:09:33.878 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:33.878 21:24:56 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10127 -I 65520 00:09:33.878 [2024-04-24 21:24:56.686127] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10127: invalid cntlid range [1-65520] 00:09:33.878 21:24:56 -- target/invalid.sh@79 -- # out='request: 00:09:33.878 { 00:09:33.878 "nqn": "nqn.2016-06.io.spdk:cnode10127", 00:09:33.878 "max_cntlid": 65520, 00:09:33.878 "method": "nvmf_create_subsystem", 00:09:33.878 "req_id": 1 00:09:33.878 } 00:09:33.878 Got JSON-RPC error response 00:09:33.878 response: 00:09:33.878 { 00:09:33.878 "code": -32602, 00:09:33.878 "message": "Invalid cntlid range [1-65520]" 00:09:33.878 }' 00:09:33.878 21:24:56 -- target/invalid.sh@80 -- # [[ request: 00:09:33.878 { 00:09:33.878 "nqn": "nqn.2016-06.io.spdk:cnode10127", 00:09:33.878 "max_cntlid": 65520, 00:09:33.878 "method": "nvmf_create_subsystem", 00:09:33.878 "req_id": 1 00:09:33.878 } 00:09:33.878 Got JSON-RPC error response 00:09:33.878 response: 00:09:33.878 { 00:09:33.878 "code": -32602, 00:09:33.878 "message": "Invalid cntlid range [1-65520]" 00:09:33.878 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:33.878 21:24:56 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27695 -i 6 -I 5 00:09:34.138 [2024-04-24 21:24:56.858716] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27695: invalid cntlid range [6-5] 00:09:34.138 21:24:56 -- target/invalid.sh@83 -- # out='request: 00:09:34.138 { 00:09:34.138 "nqn": "nqn.2016-06.io.spdk:cnode27695", 00:09:34.138 "min_cntlid": 6, 00:09:34.138 "max_cntlid": 5, 00:09:34.138 "method": "nvmf_create_subsystem", 00:09:34.138 "req_id": 1 00:09:34.138 } 00:09:34.138 Got JSON-RPC error response 00:09:34.138 response: 00:09:34.138 { 00:09:34.138 "code": -32602, 00:09:34.138 "message": "Invalid cntlid range [6-5]" 00:09:34.138 }' 00:09:34.138 21:24:56 -- target/invalid.sh@84 -- # [[ request: 00:09:34.138 { 00:09:34.138 "nqn": "nqn.2016-06.io.spdk:cnode27695", 00:09:34.138 "min_cntlid": 6, 00:09:34.138 "max_cntlid": 5, 00:09:34.138 "method": "nvmf_create_subsystem", 00:09:34.138 "req_id": 1 00:09:34.138 } 00:09:34.138 Got JSON-RPC error response 00:09:34.138 response: 00:09:34.138 { 00:09:34.138 "code": -32602, 00:09:34.138 "message": "Invalid cntlid range [6-5]" 00:09:34.138 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:34.138 21:24:56 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:34.138 21:24:56 -- target/invalid.sh@87 -- # out='request: 00:09:34.138 { 00:09:34.138 "name": "foobar", 00:09:34.138 "method": "nvmf_delete_target", 00:09:34.138 "req_id": 1 00:09:34.138 } 00:09:34.138 Got JSON-RPC error response 00:09:34.138 response: 00:09:34.138 { 00:09:34.138 "code": -32602, 00:09:34.138 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:09:34.138 }' 00:09:34.138 21:24:56 -- target/invalid.sh@88 -- # [[ request: 00:09:34.138 { 00:09:34.138 "name": "foobar", 00:09:34.138 "method": "nvmf_delete_target", 00:09:34.138 "req_id": 1 00:09:34.138 } 00:09:34.138 Got JSON-RPC error response 00:09:34.138 response: 00:09:34.138 { 00:09:34.138 "code": -32602, 00:09:34.138 "message": "The specified target doesn't exist, cannot delete it." 00:09:34.138 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:34.138 21:24:56 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:34.138 21:24:56 -- target/invalid.sh@91 -- # nvmftestfini 00:09:34.138 21:24:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:34.138 21:24:56 -- nvmf/common.sh@117 -- # sync 00:09:34.138 21:24:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:34.138 21:24:56 -- nvmf/common.sh@120 -- # set +e 00:09:34.138 21:24:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.138 21:24:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:34.138 rmmod nvme_tcp 00:09:34.138 rmmod nvme_fabrics 00:09:34.398 rmmod nvme_keyring 00:09:34.398 21:24:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.398 21:24:57 -- nvmf/common.sh@124 -- # set -e 00:09:34.398 21:24:57 -- nvmf/common.sh@125 -- # return 0 00:09:34.398 21:24:57 -- nvmf/common.sh@478 -- # '[' -n 2745025 ']' 00:09:34.398 21:24:57 -- nvmf/common.sh@479 -- # killprocess 2745025 00:09:34.398 21:24:57 -- common/autotest_common.sh@936 -- # '[' -z 2745025 ']' 00:09:34.398 21:24:57 -- common/autotest_common.sh@940 -- # kill -0 2745025 00:09:34.398 21:24:57 -- common/autotest_common.sh@941 -- # uname 00:09:34.398 21:24:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:34.398 21:24:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2745025 00:09:34.398 21:24:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:34.398 21:24:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:34.398 21:24:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2745025' 00:09:34.398 killing process with pid 2745025 00:09:34.398 21:24:57 -- common/autotest_common.sh@955 -- # kill 2745025 00:09:34.398 21:24:57 -- common/autotest_common.sh@960 -- # wait 2745025 00:09:34.657 21:24:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:34.657 21:24:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:34.657 21:24:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:34.657 21:24:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.657 21:24:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:34.657 21:24:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.657 21:24:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.657 21:24:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.561 21:24:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:36.561 00:09:36.561 real 0m13.076s 00:09:36.561 user 0m19.727s 00:09:36.561 sys 0m6.269s 00:09:36.561 21:24:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:36.561 21:24:59 -- common/autotest_common.sh@10 -- # set +x 00:09:36.561 ************************************ 00:09:36.561 END TEST nvmf_invalid 00:09:36.561 ************************************ 00:09:36.561 21:24:59 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 
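The nvmf_invalid trace above assembles each random serial/model number one character at a time: pick a decimal code point from the chars=('32' ... '127') array, render it with `printf %x` plus `echo -e '\xNN'`, and append it to `string`. A minimal bash sketch of that generator, reconstructed from the "-- target/invalid.sh@19..31" records (an approximation under those assumptions, not the script's verbatim source; the use of $RANDOM as the entropy source is assumed, since the trace does not show the selection step):

    # gen_random_s: emit a random string of $1 printable ASCII characters,
    # mirroring the per-character printf/echo steps visible in the trace.
    gen_random_s() {
        local length=$1 ll
        local chars=({32..127})   # decimal code points, as listed in the trace
        local string=
        for ((ll = 0; ll < length; ll++)); do
            # render one randomly chosen code point: printf %x -> echo -e '\xNN'
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

The result (here the 41-character model number echoed above) is then fed to rpc.py nvmf_create_subsystem, and the test passes when the captured JSON-RPC error body glob-matches *Invalid MN* (or *Invalid SN* for serial numbers, and *Invalid cntlid range* for the -i/-I cases) — exactly what the [[ ... == *\I\n\v\a\l\i\d\ \M\N* ]] checks in the trace assert.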
00:09:36.561 21:24:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:36.561 21:24:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.561 21:24:59 -- common/autotest_common.sh@10 -- # set +x 00:09:36.819 ************************************ 00:09:36.819 START TEST nvmf_abort 00:09:36.819 ************************************ 00:09:36.819 21:24:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:36.819 * Looking for test storage... 00:09:36.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.819 21:24:59 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.819 21:24:59 -- nvmf/common.sh@7 -- # uname -s 00:09:36.819 21:24:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.819 21:24:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.819 21:24:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.819 21:24:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.819 21:24:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.819 21:24:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.819 21:24:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.819 21:24:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.819 21:24:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.819 21:24:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.819 21:24:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:36.819 21:24:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:36.819 21:24:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.819 21:24:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.819 21:24:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.819 21:24:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.819 21:24:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.819 21:24:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.819 21:24:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.819 21:24:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.819 21:24:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.819 21:24:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.820 21:24:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.820 21:24:59 -- paths/export.sh@5 -- # export PATH 00:09:36.820 21:24:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.820 21:24:59 -- nvmf/common.sh@47 -- # : 0 00:09:36.820 21:24:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.820 21:24:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.820 21:24:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.820 21:24:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.820 21:24:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.820 21:24:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.820 21:24:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.820 21:24:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.820 21:24:59 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.820 21:24:59 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:36.820 21:24:59 -- target/abort.sh@14 -- # nvmftestinit 00:09:36.820 21:24:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:36.820 21:24:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.820 21:24:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:36.820 21:24:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:36.820 21:24:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:36.820 21:24:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.820 21:24:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.820 21:24:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.820 21:24:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:36.820 21:24:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:36.820 21:24:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:36.820 21:24:59 -- common/autotest_common.sh@10 -- # set +x 00:09:43.384 21:25:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
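The records that follow (nvmf/common.sh@289 onward) walk the host's PCI devices, keep the ones whose vendor:device pair matches a supported NIC — here both E810 ports report 0x8086:0x159b with the ice driver — and then read each function's netdev name out of sysfs, yielding cvl_0_0 and cvl_0_1. A minimal sketch of that discovery pattern using plain sysfs reads (an illustration of the idea only; the harness itself consults a prebuilt pci_bus_cache, as the e810+=(${pci_bus_cache[...]}) lines below show, rather than scanning this way):

    # Sketch: list net devices backed by Intel E810 functions (0x8086:0x159b).
    intel=0x8086 e810=0x159b
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
        pci_net_devs=("$pci"/net/*)                  # netdev dirs for this function
        pci_net_devs=("${pci_net_devs[@]##*/}")      # keep just the interface names
        echo "Found ${pci##*/}: ${pci_net_devs[*]}"
    done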
00:09:43.384 21:25:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:43.384 21:25:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:43.384 21:25:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:43.384 21:25:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:43.384 21:25:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:43.384 21:25:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:43.384 21:25:05 -- nvmf/common.sh@295 -- # net_devs=() 00:09:43.384 21:25:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:43.384 21:25:05 -- nvmf/common.sh@296 -- # e810=() 00:09:43.384 21:25:05 -- nvmf/common.sh@296 -- # local -ga e810 00:09:43.384 21:25:05 -- nvmf/common.sh@297 -- # x722=() 00:09:43.384 21:25:05 -- nvmf/common.sh@297 -- # local -ga x722 00:09:43.384 21:25:05 -- nvmf/common.sh@298 -- # mlx=() 00:09:43.384 21:25:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:43.384 21:25:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.384 21:25:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.384 21:25:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.384 21:25:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.384 21:25:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.384 21:25:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.384 21:25:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.384 21:25:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.384 21:25:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.384 21:25:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.384 21:25:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.384 21:25:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:43.384 21:25:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:43.385 21:25:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:43.385 21:25:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.385 21:25:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:43.385 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:43.385 21:25:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.385 21:25:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:43.385 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:43.385 21:25:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:09:43.385 21:25:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.385 21:25:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.385 21:25:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:43.385 21:25:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.385 21:25:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:43.385 Found net devices under 0000:af:00.0: cvl_0_0 00:09:43.385 21:25:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.385 21:25:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.385 21:25:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.385 21:25:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:43.385 21:25:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.385 21:25:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:43.385 Found net devices under 0000:af:00.1: cvl_0_1 00:09:43.385 21:25:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.385 21:25:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:43.385 21:25:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:43.385 21:25:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:43.385 21:25:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:43.385 21:25:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.385 21:25:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.385 21:25:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.385 21:25:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:43.385 21:25:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.385 21:25:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.385 21:25:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:43.385 21:25:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.385 21:25:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.385 21:25:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:43.385 21:25:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:43.385 21:25:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.385 21:25:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.385 21:25:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.385 21:25:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.385 21:25:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:43.385 21:25:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.385 21:25:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.385 21:25:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.385 21:25:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:43.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:43.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:09:43.385 00:09:43.385 --- 10.0.0.2 ping statistics --- 00:09:43.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.385 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:09:43.385 21:25:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:43.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:09:43.385 00:09:43.385 --- 10.0.0.1 ping statistics --- 00:09:43.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.385 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:09:43.385 21:25:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.385 21:25:06 -- nvmf/common.sh@411 -- # return 0 00:09:43.385 21:25:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:43.385 21:25:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.385 21:25:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:43.385 21:25:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:43.385 21:25:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.385 21:25:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:43.385 21:25:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:43.385 21:25:06 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:43.385 21:25:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:43.385 21:25:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:43.385 21:25:06 -- common/autotest_common.sh@10 -- # set +x 00:09:43.385 21:25:06 -- nvmf/common.sh@470 -- # nvmfpid=2749674 00:09:43.385 21:25:06 -- nvmf/common.sh@471 -- # waitforlisten 2749674 00:09:43.385 21:25:06 -- common/autotest_common.sh@817 -- # '[' -z 2749674 ']' 00:09:43.385 21:25:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.385 21:25:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:43.385 21:25:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.385 21:25:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:43.385 21:25:06 -- common/autotest_common.sh@10 -- # set +x 00:09:43.385 21:25:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:43.385 [2024-04-24 21:25:06.094276] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:09:43.385 [2024-04-24 21:25:06.094321] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.385 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.385 [2024-04-24 21:25:06.168065] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:43.385 [2024-04-24 21:25:06.239600] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.385 [2024-04-24 21:25:06.239660] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:43.385 [2024-04-24 21:25:06.239669] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.385 [2024-04-24 21:25:06.239677] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.385 [2024-04-24 21:25:06.239700] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.385 [2024-04-24 21:25:06.239799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.385 [2024-04-24 21:25:06.239891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.385 [2024-04-24 21:25:06.239892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.321 21:25:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:44.321 21:25:06 -- common/autotest_common.sh@850 -- # return 0 00:09:44.321 21:25:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:44.321 21:25:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:44.321 21:25:06 -- common/autotest_common.sh@10 -- # set +x 00:09:44.321 21:25:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.321 21:25:06 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:44.321 21:25:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.321 21:25:06 -- common/autotest_common.sh@10 -- # set +x 00:09:44.321 [2024-04-24 21:25:06.952447] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.321 21:25:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.321 21:25:06 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:44.321 21:25:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.321 21:25:06 -- common/autotest_common.sh@10 -- # set +x 00:09:44.321 Malloc0 00:09:44.321 21:25:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.321 21:25:06 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:44.321 21:25:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.321 21:25:06 -- common/autotest_common.sh@10 -- # set +x 00:09:44.321 Delay0 00:09:44.321 21:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.321 21:25:07 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:44.321 21:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.321 21:25:07 -- common/autotest_common.sh@10 -- # set +x 00:09:44.321 21:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.321 21:25:07 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:44.321 21:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.321 21:25:07 -- common/autotest_common.sh@10 -- # set +x 00:09:44.321 21:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.321 21:25:07 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:44.321 21:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.321 21:25:07 -- common/autotest_common.sh@10 -- # set +x 00:09:44.321 [2024-04-24 21:25:07.029474] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.321 21:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.321 21:25:07 -- target/abort.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:44.321 21:25:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.321 21:25:07 -- common/autotest_common.sh@10 -- # set +x 00:09:44.321 21:25:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.321 21:25:07 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:44.321 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.321 [2024-04-24 21:25:07.148132] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:46.868 Initializing NVMe Controllers 00:09:46.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:46.868 controller IO queue size 128 less than required 00:09:46.868 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:46.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:46.868 Initialization complete. Launching workers. 00:09:46.868 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41754 00:09:46.868 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41815, failed to submit 62 00:09:46.868 success 41758, unsuccess 57, failed 0 00:09:46.868 21:25:09 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:46.868 21:25:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:46.868 21:25:09 -- common/autotest_common.sh@10 -- # set +x 00:09:46.868 21:25:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:46.868 21:25:09 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:46.868 21:25:09 -- target/abort.sh@38 -- # nvmftestfini 00:09:46.868 21:25:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:46.868 21:25:09 -- nvmf/common.sh@117 -- # sync 00:09:46.868 21:25:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.868 21:25:09 -- nvmf/common.sh@120 -- # set +e 00:09:46.868 21:25:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.868 21:25:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.868 rmmod nvme_tcp 00:09:46.868 rmmod nvme_fabrics 00:09:46.868 rmmod nvme_keyring 00:09:46.868 21:25:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.868 21:25:09 -- nvmf/common.sh@124 -- # set -e 00:09:46.868 21:25:09 -- nvmf/common.sh@125 -- # return 0 00:09:46.868 21:25:09 -- nvmf/common.sh@478 -- # '[' -n 2749674 ']' 00:09:46.868 21:25:09 -- nvmf/common.sh@479 -- # killprocess 2749674 00:09:46.868 21:25:09 -- common/autotest_common.sh@936 -- # '[' -z 2749674 ']' 00:09:46.868 21:25:09 -- common/autotest_common.sh@940 -- # kill -0 2749674 00:09:46.868 21:25:09 -- common/autotest_common.sh@941 -- # uname 00:09:46.868 21:25:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:46.868 21:25:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2749674 00:09:46.868 21:25:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:46.868 21:25:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:46.868 21:25:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2749674' 00:09:46.868 killing process with pid 2749674 00:09:46.868 21:25:09 -- common/autotest_common.sh@955 -- # kill 2749674 00:09:46.868 21:25:09 -- 
common/autotest_common.sh@960 -- # wait 2749674 00:09:46.868 21:25:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:46.868 21:25:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:46.868 21:25:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:46.868 21:25:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.868 21:25:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.868 21:25:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.868 21:25:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.868 21:25:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.770 21:25:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:48.770 00:09:48.770 real 0m12.073s 00:09:48.770 user 0m12.967s 00:09:48.770 sys 0m6.134s 00:09:48.770 21:25:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:48.770 21:25:11 -- common/autotest_common.sh@10 -- # set +x 00:09:48.770 ************************************ 00:09:48.770 END TEST nvmf_abort 00:09:48.770 ************************************ 00:09:49.028 21:25:11 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:49.028 21:25:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:49.028 21:25:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:49.028 21:25:11 -- common/autotest_common.sh@10 -- # set +x 00:09:49.028 ************************************ 00:09:49.028 START TEST nvmf_ns_hotplug_stress 00:09:49.028 ************************************ 00:09:49.028 21:25:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:49.028 * Looking for test storage... 
00:09:49.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.028 21:25:11 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.028 21:25:11 -- nvmf/common.sh@7 -- # uname -s 00:09:49.028 21:25:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.028 21:25:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.028 21:25:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.028 21:25:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.028 21:25:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.028 21:25:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.028 21:25:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.028 21:25:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.028 21:25:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.028 21:25:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.287 21:25:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:49.287 21:25:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:49.287 21:25:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.287 21:25:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.287 21:25:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.287 21:25:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.287 21:25:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.287 21:25:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.287 21:25:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.287 21:25:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.287 21:25:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.287 21:25:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.287 21:25:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.287 21:25:11 -- paths/export.sh@5 -- # export PATH 00:09:49.287 21:25:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.287 21:25:11 -- nvmf/common.sh@47 -- # : 0 00:09:49.287 21:25:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.287 21:25:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.287 21:25:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.287 21:25:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.287 21:25:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.287 21:25:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.287 21:25:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.287 21:25:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.287 21:25:11 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.287 21:25:11 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:09:49.287 21:25:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:49.287 21:25:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.287 21:25:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:49.287 21:25:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:49.287 21:25:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:49.287 21:25:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.287 21:25:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.287 21:25:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.287 21:25:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:49.287 21:25:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:49.287 21:25:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:49.287 21:25:11 -- common/autotest_common.sh@10 -- # set +x 00:09:55.857 21:25:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:55.857 21:25:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:55.857 21:25:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:55.857 21:25:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:55.857 21:25:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:55.857 21:25:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:55.857 21:25:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:55.857 21:25:18 -- nvmf/common.sh@295 -- # net_devs=() 00:09:55.857 21:25:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:55.857 21:25:18 -- nvmf/common.sh@296 
-- # e810=() 00:09:55.857 21:25:18 -- nvmf/common.sh@296 -- # local -ga e810 00:09:55.857 21:25:18 -- nvmf/common.sh@297 -- # x722=() 00:09:55.857 21:25:18 -- nvmf/common.sh@297 -- # local -ga x722 00:09:55.857 21:25:18 -- nvmf/common.sh@298 -- # mlx=() 00:09:55.857 21:25:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:55.857 21:25:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.857 21:25:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.857 21:25:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.857 21:25:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.857 21:25:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.857 21:25:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.857 21:25:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.857 21:25:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.857 21:25:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.857 21:25:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.857 21:25:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.857 21:25:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:55.857 21:25:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:55.857 21:25:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:55.857 21:25:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.857 21:25:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:55.857 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:55.857 21:25:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.857 21:25:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:55.857 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:55.857 21:25:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:55.857 21:25:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.857 21:25:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.857 21:25:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:55.857 21:25:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.857 21:25:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:55.857 Found 
net devices under 0000:af:00.0: cvl_0_0 00:09:55.857 21:25:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.857 21:25:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.857 21:25:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.857 21:25:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:55.857 21:25:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.857 21:25:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:55.857 Found net devices under 0000:af:00.1: cvl_0_1 00:09:55.857 21:25:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.857 21:25:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:55.857 21:25:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:55.857 21:25:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:55.857 21:25:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:55.857 21:25:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.857 21:25:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.858 21:25:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.858 21:25:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:55.858 21:25:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.858 21:25:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.858 21:25:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:55.858 21:25:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.858 21:25:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.858 21:25:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:55.858 21:25:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:55.858 21:25:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.858 21:25:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.858 21:25:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.858 21:25:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.858 21:25:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:55.858 21:25:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.858 21:25:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.858 21:25:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.858 21:25:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:55.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:09:55.858 00:09:55.858 --- 10.0.0.2 ping statistics --- 00:09:55.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.858 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:09:55.858 21:25:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:55.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:09:55.858 00:09:55.858 --- 10.0.0.1 ping statistics --- 00:09:55.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.858 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:09:55.858 21:25:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.858 21:25:18 -- nvmf/common.sh@411 -- # return 0 00:09:55.858 21:25:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:55.858 21:25:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.858 21:25:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:55.858 21:25:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:55.858 21:25:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.858 21:25:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:55.858 21:25:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:56.117 21:25:18 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:09:56.117 21:25:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:56.117 21:25:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:56.117 21:25:18 -- common/autotest_common.sh@10 -- # set +x 00:09:56.117 21:25:18 -- nvmf/common.sh@470 -- # nvmfpid=2753931 00:09:56.117 21:25:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:56.117 21:25:18 -- nvmf/common.sh@471 -- # waitforlisten 2753931 00:09:56.117 21:25:18 -- common/autotest_common.sh@817 -- # '[' -z 2753931 ']' 00:09:56.117 21:25:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.117 21:25:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:56.117 21:25:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.117 21:25:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:56.117 21:25:18 -- common/autotest_common.sh@10 -- # set +x 00:09:56.117 [2024-04-24 21:25:18.815679] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:09:56.117 [2024-04-24 21:25:18.815727] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.117 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.117 [2024-04-24 21:25:18.890552] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:56.117 [2024-04-24 21:25:18.960646] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.117 [2024-04-24 21:25:18.960683] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.117 [2024-04-24 21:25:18.960693] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.117 [2024-04-24 21:25:18.960702] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.117 [2024-04-24 21:25:18.960709] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
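For reference, the nvmf_tcp_init sequence traced above amounts to the following interface/namespace plumbing (a minimal sketch assembled from the logged commands; cvl_0_0/cvl_0_1 are the renamed E810 ports found during device discovery, and the 10.0.0.0/24 addressing is as logged):

  # Target port moves into a private network namespace; initiator port stays in the root ns.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                 # root ns -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator check
  modprobe nvme-tcp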
00:09:56.117 [2024-04-24 21:25:18.960815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.118 [2024-04-24 21:25:18.960832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.118 [2024-04-24 21:25:18.960833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.052 21:25:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:57.052 21:25:19 -- common/autotest_common.sh@850 -- # return 0 00:09:57.052 21:25:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:57.052 21:25:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:57.052 21:25:19 -- common/autotest_common.sh@10 -- # set +x 00:09:57.052 21:25:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.052 21:25:19 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:09:57.052 21:25:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:57.052 [2024-04-24 21:25:19.829586] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.052 21:25:19 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:57.310 21:25:20 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.310 [2024-04-24 21:25:20.199446] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.567 21:25:20 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:57.568 21:25:20 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:57.826 Malloc0 00:09:57.826 21:25:20 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:58.085 Delay0 00:09:58.085 21:25:20 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.085 21:25:20 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:58.342 NULL1 00:09:58.342 21:25:21 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:58.600 21:25:21 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=2754476 00:09:58.600 21:25:21 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:58.600 21:25:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:09:58.600 21:25:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.600 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.974 Read completed with error (sct=0, sc=11) 00:09:59.974 
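The target configuration performed by the rpc.py calls traced above condenses to the following (commands and arguments are taken from the trace; only the long workspace path is folded into variables, and the backgrounding of the two binaries is an assumption about how the script runs them):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py"
  # nvmf_tgt runs inside the target namespace; the script waits for the RPC
  # socket (waitforlisten) before issuing any RPCs.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  $rpc nvmf_create_transport -t tcp -o -u 8192             # TCP transport, 8 KiB IO unit
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0                # 32 MiB malloc bdev, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # nsid 1, hot-plugged below
  $rpc bdev_null_create NULL1 1000 512                     # 1000 MiB null bdev
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # nsid 2, resized below
  # The I/O load the hotplug loop races against (PERF_PID logged as 2754476):
  "$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!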
21:25:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:59.974 21:25:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:09:59.974 21:25:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:59.974 true 00:09:59.974 21:25:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:09:59.974 21:25:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.924 21:25:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.182 21:25:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:10:01.182 21:25:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:01.182 true 00:10:01.182 21:25:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:01.182 21:25:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.450 21:25:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.709 21:25:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:10:01.709 21:25:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:01.709 true 00:10:01.709 21:25:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:01.709 21:25:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.117 21:25:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.118 21:25:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:10:03.118 21:25:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:03.376 true 00:10:03.376 21:25:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:03.376 21:25:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:04.313 21:25:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.313 21:25:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:10:04.313 21:25:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:04.571 true 00:10:04.571 21:25:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:04.571 21:25:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.571 21:25:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.830 21:25:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:10:04.830 21:25:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:05.089 true 00:10:05.089 21:25:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:05.089 21:25:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.464 21:25:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.464 21:25:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:10:06.464 21:25:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:06.464 true 00:10:06.464 21:25:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:06.464 21:25:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.397 21:25:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.655 21:25:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:10:07.655 21:25:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:07.655 true 00:10:07.655 21:25:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:07.655 21:25:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.913 21:25:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.170 21:25:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:10:08.170 
21:25:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:08.170 true 00:10:08.429 21:25:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:08.429 21:25:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.429 21:25:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.687 21:25:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:10:08.687 21:25:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:08.687 true 00:10:08.945 21:25:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:08.945 21:25:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.945 21:25:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.203 21:25:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:10:09.203 21:25:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:09.462 true 00:10:09.462 21:25:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:09.462 21:25:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.838 21:25:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.838 21:25:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:10:10.838 21:25:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:10.838 true 00:10:10.838 21:25:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:10.838 21:25:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.777 21:25:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.035 21:25:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:10:12.035 21:25:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:12.035 true 00:10:12.035 21:25:34 -- target/ns_hotplug_stress.sh@35 -- 
# kill -0 2754476 00:10:12.035 21:25:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.293 21:25:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.552 21:25:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:10:12.552 21:25:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:12.810 true 00:10:12.810 21:25:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:12.810 21:25:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.191 21:25:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.191 21:25:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:10:14.191 21:25:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:14.191 true 00:10:14.191 21:25:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:14.191 21:25:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.124 21:25:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.381 21:25:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:10:15.381 21:25:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:15.381 true 00:10:15.381 21:25:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:15.381 21:25:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.639 21:25:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.897 21:25:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:10:15.897 21:25:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:15.897 true 00:10:15.897 21:25:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:15.897 21:25:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.154 
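Each iteration above follows the same shape; roughly, with null_size starting at 1000 (a sketch of the hotplug loop as it appears in the trace, not a verbatim copy of ns_hotplug_stress.sh; $rpc and $PERF_PID as in the sketch above):

  null_size=1000
  while kill -0 "$PERF_PID"; do   # keep cycling while spdk_nvme_perf is still running
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # detach nsid 1 (Delay0)
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach it
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"                      # grow the NULL1 namespace
  done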
21:25:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.412 21:25:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:10:16.412 21:25:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:16.412 true 00:10:16.412 21:25:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:16.412 21:25:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.669 21:25:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.927 21:25:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:10:16.927 21:25:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:17.186 true 00:10:17.186 21:25:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:17.186 21:25:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.186 21:25:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.445 21:25:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:10:17.445 21:25:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:17.703 true 00:10:17.703 21:25:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:17.703 21:25:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.703 21:25:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.961 21:25:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:10:17.961 21:25:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:18.219 true 00:10:18.219 21:25:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:18.219 21:25:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.219 21:25:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.477 21:25:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:10:18.477 21:25:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:18.736 true 00:10:18.736 21:25:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:18.736 21:25:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.995 21:25:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.995 21:25:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:10:18.995 21:25:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:19.254 true 00:10:19.254 21:25:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:19.254 21:25:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.519 21:25:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.519
[2024-04-24 21:25:42.348533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.519
[log condensed: the ctrlr_bdev.c:298 nvmf_bdev_ctrlr_read_cmd error above repeats continuously, with timestamps running from 21:25:42.348533 through 21:25:42.362867, while nsid 1 is hot-removed and re-added under the randread load; the capture breaks off mid-entry at this point]
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.362909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.362952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.362994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.363933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 
[2024-04-24 21:25:42.363978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.522 [2024-04-24 21:25:42.364889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.365999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366711] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.366992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 
[2024-04-24 21:25:42.367932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.367974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.368016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.368058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.368098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.368143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.368187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.368672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.368725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.368765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.368808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.368849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.368891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.368938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.368986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.369956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.370002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.370044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.370079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.523 [2024-04-24 21:25:42.370126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370651] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.370993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.371037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.371081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.371127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.371177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.371221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.371270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.371319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.371366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.371410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.371460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.371954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 
[2024-04-24 21:25:42.372292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.372960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.373962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.374009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.524 [2024-04-24 21:25:42.374053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374643] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.374882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.375978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 
[2024-04-24 21:25:42.376246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.376932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 21:25:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:10:19.525 [2024-04-24 21:25:42.376980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.377029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.377078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.377126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.377167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.377211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.377259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 21:25:42.377302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 21:25:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:19.525 [2024-04-24 21:25:42.377348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.525 [2024-04-24 
[... identical read errors continue, 21:25:42.377348 through 21:25:42.385831; duplicate lines omitted ...]
00:10:19.527 [2024-04-24 21:25:42.385881] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.385928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.385979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.386983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 
[2024-04-24 21:25:42.387116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.387969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.388009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.388046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.388087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.388129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.388172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.388225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.388266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.388310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.388355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.388402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.388601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.388960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.527 [2024-04-24 21:25:42.389719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.389752] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.389783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.389816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.389847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.389878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.389909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.389939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.389974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 
[2024-04-24 21:25:42.390868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.390972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.391700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.392980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393756] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.528 [2024-04-24 21:25:42.393904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.393952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.393998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.394948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 
[2024-04-24 21:25:42.394996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.395036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.395084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.395138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.395181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.395355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.396962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.397984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398182] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.398930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.399147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.399203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:19.529 [2024-04-24 21:25:42.399253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.399300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.399347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.529 [2024-04-24 21:25:42.399392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.399441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.399499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.399548] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.399596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.399641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.399692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.399738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.399784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.399834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.399883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.399928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.399976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 
[2024-04-24 21:25:42.400806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.400977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.401994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.402038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.530 [2024-04-24 21:25:42.402087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.402279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.402943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.402992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.403736] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.826 [2024-04-24 21:25:42.432247] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.432912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.433395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.433447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.433505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.433552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.433595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.433640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.433688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.433732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.433779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.433828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 
[2024-04-24 21:25:42.433870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.433916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.433968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.434973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.435996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.436048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.436092] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.436136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.436183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.436363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.436410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.436459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.436504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.833 [2024-04-24 21:25:42.436552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.436610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.436656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.436702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.436751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.436798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.436847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.436898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.436949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.436993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 
[2024-04-24 21:25:42.437442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.437968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.438001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.438043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.438082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.438126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.438166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.438211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.438254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.438298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.438346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.438386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.438860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.438912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.438958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.439982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440066] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.440956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.441007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.441061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.441106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.441155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.441204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.441253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 [2024-04-24 21:25:42.441300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.834 
[2024-04-24 21:25:42.441349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.441396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.441440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.441489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.441544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.441588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.441634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.441681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.441736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.441787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.441971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.442673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.443397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.443449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.443502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.443554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.443604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.443657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.443709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.443761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.443807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.443854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.443899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.443949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.443993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444543] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.444967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 
[2024-04-24 21:25:42.445672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.445961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.446013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.446062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.446108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.446152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.446201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.446251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.446297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.446486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.446534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.446579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.446629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.835 [2024-04-24 21:25:42.446677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.446724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.446769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.446824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.446871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.446920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.446966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.447983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448153] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.448966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.449015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.449065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.449112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.449155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.449205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.449251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.449302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.449352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 
[2024-04-24 21:25:42.449866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.449916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.449950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.449987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:19.836 [2024-04-24 21:25:42.450279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.836 [2024-04-24 21:25:42.450676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-04-24 21:25:42.450727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-04-24 21:25:42.450778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-04-24 21:25:42.450826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-04-24 21:25:42.450871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-04-24 21:25:42.450919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 [2024-04-24 
21:25:42.450967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.837 
[... the identical "ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeats continuously from 21:25:42.450967 through 21:25:42.478222; duplicate lines omitted ...] 00:10:19.842 [2024-04-24 21:25:42.478222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.478982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479418] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.479858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.480073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.842 [2024-04-24 21:25:42.480122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 
[2024-04-24 21:25:42.480788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.480999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.481996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.482921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.483781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.483829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.483869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.483912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.483953] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.484960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.485013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.485052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.485095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.485141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 
[2024-04-24 21:25:42.485183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.485223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.485270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.485310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.485364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.485400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.843 [2024-04-24 21:25:42.485443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.485492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.485534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.485577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.485618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.485660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.485703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.485750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.485804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.485855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.485903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.485947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.486980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.487724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488173] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.488983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 
[2024-04-24 21:25:42.489347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.489981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.490020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.490059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.490101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.490142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.490180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.490223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.490264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.490310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.490355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.490400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.490456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.844 [2024-04-24 21:25:42.490500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.490542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.490586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.490625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.490673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.490715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.490758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.490801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.490844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.490887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.490937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.490989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491862] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.491962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.492969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 
[2024-04-24 21:25:42.493064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.493997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.494051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.494106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.494620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.494676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.494729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.494776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.494825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.845 [2024-04-24 21:25:42.494875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.494921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.494968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846 [2024-04-24 21:25:42.495853] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.846
[2024-04-24 21:25:42.495894 .. 21:25:42.504560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical entry repeated throughout this interval; duplicates elided) 00:10:19.846
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:19.847
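The flood above is the expected output of a negative-path unit test: each request is built so that the transfer the NVMe read command asks for (NLB blocks times the 512-byte block size) exceeds the 1-byte buffer its SGL describes, so nvmf_bdev_ctrlr_read_cmd rejects every one, and the suppressed completions carry sct=0, sc=15 (0x0F), the NVMe generic status "Data SGL Length Invalid". Below is a minimal, self-contained C sketch of that kind of bounds check; the names (read_fits_sgl, nlb, block_size, sgl_length) are illustrative stand-ins, not SPDK's actual code.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch of the validation behind the error above: a read must
 * not ask for more data (NLB * block size) than the SGL buffer can hold. */
static bool
read_fits_sgl(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
{
    if (nlb * block_size > sgl_length) {
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n", nlb, block_size, sgl_length);
        return false; /* caller completes the request with an error status */
    }
    return true;
}

int
main(void)
{
    /* The case the test drives repeatedly: 1 block of 512 bytes offered a
     * 1-byte SGL, so the request is rejected. */
    printf("request %s\n", read_fits_sgl(1, 512, 1) ? "accepted" : "rejected");
    return 0;
}

Each rejected request logs one line from the same source location (ctrlr_bdev.c:298), which is why the message repeats verbatim with only the timestamp advancing.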
[2024-04-24 21:25:42.505064 .. 21:25:42.524785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical entry repeated throughout this interval; duplicates elided) 00:10:19.851
[2024-04-24 21:25:42.524838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:10:19.851 [2024-04-24 21:25:42.524885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.851 [2024-04-24 21:25:42.524933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.851 [2024-04-24 21:25:42.524975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.525019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.525066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.525100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.525140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.525183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.525230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.525776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.525842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.525896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.525945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.525994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526606] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.526967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 
[2024-04-24 21:25:42.527747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.527995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.528987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.529680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.530340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.530384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.530430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.530474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.530519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.530572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.530618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.530658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.852 [2024-04-24 21:25:42.530693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.530736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.530779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.530826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.530866] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.530907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.530958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.531989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 
[2024-04-24 21:25:42.532073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.532982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.533987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534549] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.534967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.535011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.535071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.535122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.535171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.535218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.535266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.853 [2024-04-24 21:25:42.535313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.535358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.535413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.535465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.535511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.535561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.535612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.535657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.535707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.535759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 
[2024-04-24 21:25:42.535808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.535854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.535901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.535945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.535995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.536043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.536092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.536142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.536192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.536239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.536291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.536335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.536891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.536946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.536996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.537965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538703] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.538982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.854 [2024-04-24 21:25:42.539725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.539767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.540269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 
[2024-04-24 21:25:42.540324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.540376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.540430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.540489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.540540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.540586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.540632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.540686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.540736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.540787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.540829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.540881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.540931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.540980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.541978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542691] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.542984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.543032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.543083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.543130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.543184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.543690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.543735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.543784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.543833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.543885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.543935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.543983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.544035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.544081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.544126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.544171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.544216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.544258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 [2024-04-24 21:25:42.544298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855 
[2024-04-24 21:25:42.544343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.855
[... identical *ERROR* line repeated for every timestamp from 21:25:42.544388 through 21:25:42.544743 ...]
true 00:10:19.855
[... identical *ERROR* line repeated for every timestamp from 21:25:42.544787 through 21:25:42.557521 ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:19.858
[... identical *ERROR* line repeated for every timestamp from 21:25:42.557572 through 21:25:42.567792 ...]
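The flood of identical *ERROR* lines above is consistent with what the ns_hotplug_stress test is doing: namespace 1 is hot-removed while reads are still queued against it, so each read describes more data (NLB 1 * 512-byte blocks = 512 bytes) than its SGL actually maps (1 byte) and the target fails the command. The suppressed completions report sct=0, sc=15, which lines up with the NVMe generic-status code 0x0f, Data SGL Length Invalid. Below is a minimal C sketch of that kind of length check; the struct and field names are illustrative assumptions, not the actual definitions in SPDK's ctrlr_bdev.c.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for the request state an NVMe-oF target sees;
 * the real SPDK types (struct spdk_nvmf_request and friends) differ. */
struct read_req {
    uint64_t num_blocks; /* NLB decoded from the command (already 1-based here) */
    uint32_t block_size; /* logical block size of the backing bdev, e.g. 512 */
    uint32_t sgl_length; /* bytes of payload the request's SGL actually maps */
};

/* The check that produces the burst above: a read must not describe
 * more data than its SGL can receive. Returns 0 to proceed, -1 to fail
 * the command (real code would complete it with sct=0, sc=0x0f). */
static int validate_read_length(const struct read_req *req)
{
    uint64_t xfer = req->num_blocks * (uint64_t)req->block_size;

    if (xfer > req->sgl_length) {
        fprintf(stderr,
                "Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu32 "\n",
                req->num_blocks, req->block_size, req->sgl_length);
        return -1;
    }
    return 0;
}

int main(void)
{
    /* The exact values from the log: 1 block of 512 bytes vs. a 1-byte SGL. */
    struct read_req req = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };

    return validate_read_length(&req) == 0 ? 0 : 1;
}

The two script traces just below are the test loop itself: kill -0 2754476 only checks that the target process with that PID is still alive (signal 0 delivers nothing), and rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 hot-removes namespace 1 from the subsystem, the unplug event that keeps provoking the failed reads.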
[... identical *ERROR* line repeated for every timestamp from 21:25:42.567841 through 21:25:42.568791 ...]
21:25:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476 00:10:19.860
[... identical *ERROR* line repeated for every timestamp from 21:25:42.568835 through 21:25:42.568878 ...]
21:25:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.860
[... identical *ERROR* line repeated for every timestamp from 21:25:42.568957 through 21:25:42.571780 ...]
[2024-04-24 21:25:42.571827] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.571870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.571919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.571961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 
[2024-04-24 21:25:42.572938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.572976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.573941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.574428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.574482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.574520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.574568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.574612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.574650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.574691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.574736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.574780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.574824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.574869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.574918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.574976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.575024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.575077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.575126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.575172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.575222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.575274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.575321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.575374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.575416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.575471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.861 [2024-04-24 21:25:42.575515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.575565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.575612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.575658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.575706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.575756] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.575801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.575849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.575902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.575945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.576935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 
[2024-04-24 21:25:42.576978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.577024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.577065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.577116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.577162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.577207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.577249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.577296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.577336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.577894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.577945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.577989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.578967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579880] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.579964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.862 [2024-04-24 21:25:42.580732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.580776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.580825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.581011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.581358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.581409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.581467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 
[2024-04-24 21:25:42.581516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.581561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.581608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.581661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.581706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.581751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.581801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.581845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.581900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.581941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.581985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.582985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583781] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.583975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.584019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.584069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.584119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.584167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.584647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.584692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.584734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.584772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.584810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.584852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.584902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.584949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.584998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 
[2024-04-24 21:25:42.585429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.863 [2024-04-24 21:25:42.585927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.585969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.586988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.587034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.587082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.587134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.587178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.587229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.587275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.587321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.587364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.587408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.587456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.587496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.587535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.587578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588231] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.588980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 
[2024-04-24 21:25:42.589499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.589965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.590011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.590053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.590095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.590134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.590177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.590225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.590273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.590322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.590368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.590422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.590485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.590537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.590584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.864 [2024-04-24 21:25:42.590635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.590683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.590739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.590790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.590836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.590890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.590934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.590983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.591447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.591502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.591542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.591585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.591621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.591660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.591704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.591747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.591787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.591831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.591869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.591920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.591971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592262] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.592968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 
[2024-04-24 21:25:42.593440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.593973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.594019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.594066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.594116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.594163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.594205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.594249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.594303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.594342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.594826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.594873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.594917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.594958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.595947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.596004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.596053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.596102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.596149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.596201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.596258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.596305] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.596356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.596406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.596462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.596512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.596558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.865 [2024-04-24 21:25:42.596605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.596656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.596704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.596751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.596801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.596854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.596915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.596964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 
[2024-04-24 21:25:42.597565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.597787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.598972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.599957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600295] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.600984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.601027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.601071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.601123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.601624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.601675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.601723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.601767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.601823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.601870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.601919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 
[2024-04-24 21:25:42.601965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.866 [2024-04-24 21:25:42.602756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.602791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.602830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.602869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.602914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.602956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.603967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.604010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.604055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.604098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.604141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.604186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.604235] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.604291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.604334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.604382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.604430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.604925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.604973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 
[2024-04-24 21:25:42.605831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.605979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.606998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.607813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608708] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.867 [2024-04-24 21:25:42.608996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 
[2024-04-24 21:25:42.609886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.609974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.610955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.611007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.611062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.611113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.611160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.611650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.611692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.611740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.611781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:19.868 [2024-04-24 21:25:42.611823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.611858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.611899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.611941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.611986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.612031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.612073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.612115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.612147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.612198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.612237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.612281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.612328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.612368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.612407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.612448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.612501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.868 [2024-04-24 21:25:42.612547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1
00:10:19.868 [2024-04-24 21:25:42.612595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:19.868 - 00:10:19.874 [2024-04-24 21:25:42.612652 - 2024-04-24 21:25:42.641010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: (same "Read NLB 1 * block size 512 > SGL length 1" message repeated back-to-back; consecutive duplicates collapsed)
[2024-04-24 21:25:42.641058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.641104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.641146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.641193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.641239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.641277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.641319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.641364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.641408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.641467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.641511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.642966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.874 [2024-04-24 21:25:42.643715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.643757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.643798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.643840] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.643880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.643924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.643970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.644895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.645393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.645445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.645504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 
[2024-04-24 21:25:42.645548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.645594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.645642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.645691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.645737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.645783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.645827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.645882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.645926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.645970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.646961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647866] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.647958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.648020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.648066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.648115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.648161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.648204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.648244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.648285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.648759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.648804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.648848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.648893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.648934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.648978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.649026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.649071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.649114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.875 [2024-04-24 21:25:42.649155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 
[2024-04-24 21:25:42.649498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.649991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.650989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.651034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.651080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.651123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.651169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.651216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.651269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.651313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.651362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.651408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.651467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.651515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.651566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.651615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652304] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.652995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 
[2024-04-24 21:25:42.653523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.653961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.654004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.654037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.654077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.876 [2024-04-24 21:25:42.654117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.654957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.655009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.655499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.655546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.655590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.655624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.655669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.655711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.655757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.655795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.655837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.655877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.655924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.655966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656230] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.656993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 
[2024-04-24 21:25:42.657424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.657954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.658000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.658046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.658091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.658134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.658175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.658212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.877 [2024-04-24 21:25:42.658258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.878 [2024-04-24 21:25:42.658303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.878 [2024-04-24 21:25:42.658862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.878 [2024-04-24 21:25:42.658908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.878 [2024-04-24 21:25:42.658956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.878 [2024-04-24 21:25:42.659002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.878 [2024-04-24 21:25:42.659042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:19.878 [2024-04-24 21:25:42.659085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:19.878 [2024-04-24 21:25:42.659132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:19.878 [... the identical "ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entry repeats several hundred times between 21:25:42.659 and 21:25:42.688; verbatim duplicates elided ...]
00:10:19.879 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:10:20.162 [2024-04-24 21:25:42.688071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[2024-04-24 21:25:42.688112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.688966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.689012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.689052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.689097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.689138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.689183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.689224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.689268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.689770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.689826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.689868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.689915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.689963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690867] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.690957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.691967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 
[2024-04-24 21:25:42.692011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.692047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.692085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.692124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.692167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.692205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.692249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.692293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.692339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.162 [2024-04-24 21:25:42.692383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.692424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.692475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.692514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.692557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.693991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694703] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.694993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.695803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 
[2024-04-24 21:25:42.695852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.696338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.696384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.696432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.696495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.696544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.696589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.696635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.696681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.696726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.696777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.696821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.696872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.696920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.696967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.697012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.697059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.697102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.697145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.697191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.697241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.697287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.697332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.697379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.697434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.697487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.163 [2024-04-24 21:25:42.697533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.697577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.697625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.697668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.697720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.697765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.697809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.697858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.697902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.697949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.697997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698637] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.698969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.699011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.699049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.699092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.699138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.699621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.699666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.699707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.699749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.699792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.699833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.699877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.699922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.699964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 
[2024-04-24 21:25:42.700199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.700974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.701008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.701048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.164 [2024-04-24 21:25:42.701088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.165 [2024-04-24 21:25:42.701126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.165 [2024-04-24 21:25:42.701165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.165 [2024-04-24 21:25:42.701203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.165 [2024-04-24 21:25:42.701245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.165 [2024-04-24 21:25:42.701281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
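[editor's note: the repeated message above is emitted by nvmf_bdev_ctrlr_read_cmd() when the transfer length implied by an NVMe read command (NLB, the number of logical blocks, times the block size) exceeds the length of the SGL buffer attached to the request, so the request is rejected rather than allowed to overrun the buffer. The sketch below is a minimal, self-contained paraphrase of that comparison only; the helper name read_fits_in_sgl and its signature are illustrative stand-ins, not SPDK's actual API.]

/*
 * Hedged sketch of the length check behind the log message above.
 * This is a standalone paraphrase, not SPDK's nvmf_bdev_ctrlr_read_cmd().
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Returns true when the read described by the command fits in the SGL. */
static bool
read_fits_in_sgl(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
{
	/* The transfer length implied by the command is NLB * block size.
	 * (A production implementation would also guard this multiply
	 * against overflow.) */
	if (nlb * block_size > sgl_length) {
		fprintf(stderr,
			"Read NLB %" PRIu64 " * block size %u > SGL length %u\n",
			nlb, block_size, sgl_length);
		return false;
	}
	return true;
}

int
main(void)
{
	/* Reproduces the failing case from the log: one 512-byte block
	 * against a 1-byte SGL. Exit 0 when the oversize read is
	 * correctly rejected. */
	return read_fits_in_sgl(1, 512, 1) ? 1 : 0;
}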
[log condensed: the same *ERROR* line continues to repeat, timestamps 2024-04-24 21:25:42.701325 through 21:25:42.711106]
[2024-04-24 21:25:42.711149] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.167 [2024-04-24 21:25:42.711187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.167 [2024-04-24 21:25:42.711229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.167 [2024-04-24 21:25:42.711271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.167 [2024-04-24 21:25:42.711315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.167 [2024-04-24 21:25:42.711361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.167 [2024-04-24 21:25:42.711403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.167 [2024-04-24 21:25:42.711441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.711493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.711537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.711576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.711622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.711665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.711708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.711752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.711800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.711852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.711904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.711953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 
[2024-04-24 21:25:42.712357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.712790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.713976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.714967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.715014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.715055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.715102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.715145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.715190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.715233] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.715276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.715316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.715359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.715404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.168 [2024-04-24 21:25:42.715456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.715508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.715556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.715603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.715652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.715699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.715747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.715796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.715848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.715893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.715941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.715986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.716038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.716080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.716136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.716182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.716228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.716264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.716787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.716839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.716889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 
[2024-04-24 21:25:42.716939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.716984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:20.169 [2024-04-24 21:25:42.717379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.717992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 
21:25:42.718113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.718979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.719035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.719083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.719129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.719179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.719226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.169 [2024-04-24 21:25:42.719279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:20.170 [2024-04-24 21:25:42.719327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.719374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.719423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.719480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.719534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.719574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.719616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.719659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.719706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.720996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.721986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722153] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.722977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.723020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.723067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.723102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.723144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.723682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.723730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.723777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.723825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 
[2024-04-24 21:25:42.723870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.723918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.723968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.724018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.724065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.724111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.724154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.170 [2024-04-24 21:25:42.724205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.724964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.725956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.726008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.726054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.726103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.726153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.726202] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.726248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.726295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.726345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.726395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.726445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.726496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.726551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 
[2024-04-24 21:25:42.727924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.727966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.728014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.728054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.728095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.171 [2024-04-24 21:25:42.728137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.728950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.172 [2024-04-24 21:25:42.729004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
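[Note: the flood of identical entries above means each 1-block (512-byte) read arrived with an SGL describing only 1 byte of payload buffer, so nvmf_bdev_ctrlr_read_cmd rejects the command before it ever reaches the backing bdev. Below is a minimal C sketch of that length check, reconstructed from the message text alone; the function name, parameters, and return convention are illustrative, not copied from SPDK's ctrlr_bdev.c.]

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative reconstruction (not SPDK source) of the check behind
 * "Read NLB 1 * block size 512 > SGL length 1": a read of nlb logical
 * blocks needs nlb * block_size bytes of payload buffer, but the
 * request's SGL only maps sgl_len bytes, so the command is rejected. */
static int read_cmd_length_ok(uint64_t nlb, uint32_t block_size, uint64_t sgl_len)
{
    if (nlb * block_size > sgl_len) {
        fprintf(stderr,
                "*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu64 "\n",
                nlb, block_size, sgl_len);
        return 0; /* caller would complete the command with a transfer error */
    }
    return 1;
}

int main(void)
{
    /* The exact case repeated throughout this log: a 1-block (512-byte)
     * read offered a 1-byte SGL. */
    return read_cmd_length_ok(1, 512, 1) ? 0 : 1;
}
```

[Since the ns_hotplug_stress test below keeps adding and removing namespaces while reads are in flight, these rejected-read bursts appear to be the test's expected noise rather than a build failure.]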
00:10:20.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:20.172 21:25:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:20.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:20.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:20.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:20.172 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:20.172 [2024-04-24 21:25:42.929611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same read error repeated from 21:25:42.929672 through 21:25:42.931950; identical lines omitted ...]
00:10:20.173 [2024-04-24 21:25:42.931988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*:
Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.932021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.932064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.932103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.932138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.932181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.932221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.932265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.932304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.932344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.932388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.932899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.932958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933631] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.933964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 
[2024-04-24 21:25:42.934736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.934974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.935768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.936970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.937008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.937050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.173 [2024-04-24 21:25:42.937091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937417] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.937998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 
[2024-04-24 21:25:42.938658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.938997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.939496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.939546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.939588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.939627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.939674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.939713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.939754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.939798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.939842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.939885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.939930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.939979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.940998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941394] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.174 [2024-04-24 21:25:42.941955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.941998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.942038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.942079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.942116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.942161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.942209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.942256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.942302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.942815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.942868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.942913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.942960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 
[2024-04-24 21:25:42.943012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.943982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.944981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945316] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.945649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.175 [2024-04-24 21:25:42.946713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.946744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.946792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.946834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 
[2024-04-24 21:25:42.946878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.946921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.946966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.947991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.948958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.949462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.949518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.949570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.949620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.949666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.949716] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.949770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.949822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.949870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.949913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.949956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 [2024-04-24 21:25:42.950863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.176 
[2024-04-24 21:25:42.950908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(the line above repeats well over a hundred more times between 21:25:42.950945 and 21:25:42.957646, with only the microsecond timestamp changing; repetitions elided)
00:10:20.178 21:25:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025
00:10:20.178 21:25:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
(the same "Read NLB 1 * block size 512 > SGL length 1" error keeps repeating, interleaved around these two script lines; repetitions elided)
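The two script lines above are the hot-plug stress loop at work: ns_hotplug_stress.sh bumps null_size to 1025 and calls scripts/rpc.py bdev_null_resize NULL1 1025 to grow the NULL1 bdev (the RPC's size argument is in megabytes) while reads are still in flight. Inside the target, a null-bdev resize amounts to recomputing the block count and notifying open descriptors. The following is a minimal C sketch of that path, assuming SPDK's spdk_bdev_notify_blockcnt_change() helper from bdev_module.h; it is an illustration of the mechanism, not the verbatim SPDK resize handler:

#include <stdint.h>
#include "spdk/bdev_module.h"

/* Sketch: translate a size in MiB into a block count and publish the
 * new capacity. I/O already queued against the old geometry can race
 * with the change, which is exactly the window this stress test
 * exercises. */
static int
null_bdev_resize_sketch(struct spdk_bdev *bdev, uint64_t new_size_mb)
{
	uint64_t new_blockcnt = new_size_mb * 1024 * 1024 / bdev->blocklen;

	/* Upstream notifier for capacity changes (assumed here); it
	 * fails if a descriptor was opened without resize events
	 * enabled. */
	return spdk_bdev_notify_blockcnt_change(bdev, new_blockcnt);
}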
00:10:20.178 Message suppressed 999 times: [2024-04-24 21:25:42.959430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:20.178 Read completed with error (sct=0, sc=15)
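Each of the suppressed messages is the transport-length check in nvmf_bdev_ctrlr_read_cmd() firing: the Read command asks for NLB 1 block of 512 bytes, but the request's SGL describes only 1 byte, so the target completes the read with sct=0, sc=15, i.e. SCT generic / SC 0x0f "Data SGL Length Invalid". A minimal C sketch of that check, paraphrasing the SPDK code around ctrlr_bdev.c:298 (the status constants are the upstream names from spdk/nvme_spec.h; the function body is abridged, not the verbatim source):

#include <stdbool.h>
#include <stdint.h>
#include "spdk/nvme_spec.h"

/* Abridged length validation behind the repeated
 * "Read NLB %lu * block size %u > SGL length %u" error. */
static bool
read_length_ok(uint64_t num_blocks, uint32_t block_size,
	       uint32_t sgl_length, struct spdk_nvme_cpl *rsp)
{
	if (num_blocks * block_size > sgl_length) {
		/* sct=0 (generic), sc=0x0f -> the "sc=15" in the log */
		rsp->status.sct = SPDK_NVME_SCT_GENERIC;
		rsp->status.sc = SPDK_NVME_SC_DATA_SGL_LENGTH_INVALID;
		return false;
	}
	return true;
}

With NLB 1, block size 512 and SGL length 1, the comparison is 512 > 1, so every read in the burst fails the same way.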
(the same error continues, several hundred more occurrences between 21:25:42.960174 and 21:25:42.978790 while the elapsed-time stamp advances from 00:10:20.178 to 00:10:20.182; repetitions elided)
[2024-04-24 21:25:42.978833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.978877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.978923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.978956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.979426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.979484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.979524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.979567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.979611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.979653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.979697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.979738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.979785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.979829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.979868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.979913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.979960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980447] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.980954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.981000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.981051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.981098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.182 [2024-04-24 21:25:42.981146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 
[2024-04-24 21:25:42.981605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.981961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.982012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.982056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.982110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.982166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.982211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.982256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.982305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.982806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.982857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.982905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.982950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.983998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984423] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.984980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.985031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.985080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.985133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.985182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.985227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.985274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.985321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.985378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.985427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.985479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.985529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.985578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 
[2024-04-24 21:25:42.985622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.985669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.986151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.986207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.986249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.986293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.986332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.986375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.986410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.183 [2024-04-24 21:25:42.986463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.986506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.986552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.986593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.986640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.986679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.986722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.986764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.986807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.986854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.986895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.986941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.986989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.987950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988383] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.988976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.989022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.989073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.989580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.989629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.989674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.989719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.989769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.989811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.989857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.989890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.989940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.989980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 
[2024-04-24 21:25:42.990069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.990999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.991049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.991095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.991143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.991196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.991255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.991304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.991353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.991395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.184 [2024-04-24 21:25:42.991441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.991495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.991543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.991589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.991624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.991663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.991708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.991756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.991802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.991841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.991888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.991928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.991966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.992008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.992055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.992107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.992153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.992203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.992249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.992307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.992351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.992400] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.992893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.992952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.992992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.993964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 
[2024-04-24 21:25:42.994012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.994988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.995034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.995092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.995140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.185 [2024-04-24 21:25:42.995188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.995231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.995278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.995328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.995383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.995430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.995492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.995537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.995587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.995637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.995689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.995733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.995787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.996274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.996325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.996379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.996428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.996481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.996526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.996574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.996619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.996664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.996710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.996744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.996789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186 [2024-04-24 21:25:42.996835] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.186
[... same ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd SGL-length error repeated for every queued read from 21:25:42.996889 through 21:25:43.024982 ...]
00:10:20.188 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
size 512 > SGL length 1 00:10:20.191 [2024-04-24 21:25:43.025029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.191 [2024-04-24 21:25:43.025081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.191 [2024-04-24 21:25:43.025130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.191 [2024-04-24 21:25:43.025181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.191 [2024-04-24 21:25:43.025231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.191 [2024-04-24 21:25:43.025275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.191 [2024-04-24 21:25:43.025323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.191 [2024-04-24 21:25:43.025356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.025395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.025437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.025481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.025523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.025569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.025609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.025650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.025692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.025738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.025778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.025811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.025858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.026425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.026488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.026535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.026585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.026637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.026681] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.026730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.026780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.026835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.026883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.026930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.026977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 
[2024-04-24 21:25:43.027901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.027948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.028984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.029034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.029083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.029137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.029183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.029230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.029278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.029326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.192 [2024-04-24 21:25:43.029370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.029925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.030973] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.031961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.032011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.032073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.032128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.032175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 
[2024-04-24 21:25:43.032229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.461 [2024-04-24 21:25:43.032277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.032989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.033029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.033066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.033105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.033658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.033715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.033768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.033818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.033871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.033919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.033974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.034982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035226] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.035956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 
[2024-04-24 21:25:43.036379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.036688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.037985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.038984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.039038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.039088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.039134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.039178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.039226] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.039273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.039320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.462 [2024-04-24 21:25:43.039370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.039415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.039473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.039518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.039564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.039612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.039658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.039704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.039749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.039793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.039826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.039868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.039914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.039955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.039998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.040047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.040514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.040558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.040602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.040644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.040688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.040739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.040782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 
[2024-04-24 21:25:43.040821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.040867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.040911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.040960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.041993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.042967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.043009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.043057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.043101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.043144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.043179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.463 [2024-04-24 21:25:43.043222] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:20.463 [2024-04-24 21:25:43.043265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the *ERROR* line above repeated verbatim several hundred times, app timestamps 2024-04-24 21:25:43.043306 through 21:25:43.064877; duplicates condensed ...]
00:10:20.466 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... further verbatim repeats of the same *ERROR* line, app timestamps 21:25:43.064920 through 21:25:43.072018; duplicates condensed ...]
[2024-04-24 21:25:43.070492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.070985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.071976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.072999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073291] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.073856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.074339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.074383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.074432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.074487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.074531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.467 [2024-04-24 21:25:43.074578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.074622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.074661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.074711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.074755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.074809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.074856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.074910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 
[2024-04-24 21:25:43.074956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.075946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.076978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.077019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.077061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.077103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.077143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.077184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.077228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.077276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.077781] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.077833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.077879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.077927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.077978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 
[2024-04-24 21:25:43.078915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.078956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.079982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.080028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.080069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.080108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.080152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.080200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.080243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.080291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.080345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.080389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.080435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.080489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.468 [2024-04-24 21:25:43.080538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.080587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081764] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.081956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.082930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 
[2024-04-24 21:25:43.082976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.083972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.084483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.084538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.084585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.084630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.084681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.084737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.084783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.084826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.084869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.084909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.084948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.084991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085671] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.085952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 
[2024-04-24 21:25:43.086932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.086973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.087020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.087064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.087108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.087143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.087185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.087227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.087280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.087323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.087908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.087959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.088003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.088053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.088100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.088148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.088198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.088246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.469 [2024-04-24 21:25:43.088295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.088348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.088395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.088454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.088504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.088553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.088605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.088651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.088701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.088748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.088792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.088842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.088893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.088942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.088993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 [2024-04-24 21:25:43.089929] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.470 
00:10:20.474 [identical *ERROR* line from ctrlr_bdev.c:298 repeated several hundred times, 2024-04-24 21:25:43.089 through 21:25:43.119; repetitions elided] 
00:10:20.474 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
length 1 00:10:20.474 [2024-04-24 21:25:43.119060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.119954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.120974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.121020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.121062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.121106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.121151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.121185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.121226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.121269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.121312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.121354] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.121403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.121441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.121490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.474 [2024-04-24 21:25:43.122774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.122819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.122871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.122919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 true 00:10:20.475 [2024-04-24 21:25:43.122964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:10:20.475 [2024-04-24 21:25:43.123096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.123971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.124998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.125473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.125526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.125559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.125605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.125648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.125693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.125741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.125786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.125827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.125876] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.125918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.125962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.126988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 
[2024-04-24 21:25:43.127066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.127994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.128045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.128091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.128132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.128166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.128212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.128255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.128296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.128348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.128876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.128929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.128980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.475 [2024-04-24 21:25:43.129715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.129763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.129809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.129856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.129913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.129960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130007] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.130999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.131047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.131098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 [2024-04-24 21:25:43.131148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:20.476 
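(Note on the elided errors: each one reports the same bounds check failing inside nvmf_bdev_ctrlr_read_cmd. A read of NLB 1 at a 512-byte block size needs 1 * 512 = 512 bytes of payload buffer, but the SGL carried by the request describes only 1 byte, so the target rejects the command and the read completes with an error status, which appears to be what the "Message suppressed 999 times: Read completed with error" summaries count.)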
00:10:20.476 21:25:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476
00:10:20.476 21:25:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:21.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:21.409 21:25:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:21.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:21.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:21.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:21.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:21.668 21:25:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026
00:10:21.668 21:25:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
true
00:10:21.926 21:25:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476
00:10:21.926 21:25:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
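(The add/remove/resize churn above and below is the loop in ns_hotplug_stress.sh whose xtrace lines @35-@41 repeat through this section. A minimal sketch of that cycle, reconstructed from the trace; the loop form, the variable names, and the PERF_PID handle are assumptions, since only the individual commands appear verbatim in the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1025
    while kill -0 "$PERF_PID" 2>/dev/null; do      # loop while the I/O generator (pid 2754476 here) is alive
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1   # hot-remove namespace 1 under active I/O
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0 # re-attach the Delay0 bdev as a namespace
        null_size=$((null_size + 1))
        "$rpc" bdev_null_resize NULL1 "$null_size" # grow the NULL1 bdev; the RPC prints 'true'
    done
    wait "$PERF_PID"                               # reap the generator once kill -0 starts failing

Each pass bumps null_size by one, matching the 1026, 1027, ... progression in the trace below.)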
00:10:22.858 21:25:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:22.858 21:25:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027
00:10:22.858 21:25:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
true
00:10:22.858 21:25:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476
00:10:22.858 21:25:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:23.115 21:25:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:23.373 21:25:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028
00:10:23.373 21:25:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:10:23.631 true
00:10:23.631 21:25:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476
00:10:23.631 21:25:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:23.631 21:25:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:23.893 21:25:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029
00:10:23.893 21:25:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:10:24.151 true
00:10:24.151 21:25:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476
00:10:24.151 21:25:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:24.151 21:25:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:24.408 21:25:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030
00:10:24.408 21:25:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:10:24.666 true
00:10:24.666 21:25:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476
00:10:24.666 21:25:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:25.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:25.861 21:25:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:25.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:25.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:25.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:25.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:25.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:25.861 21:25:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031
00:10:25.861 21:25:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
true
00:10:26.119 21:25:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476
00:10:26.119 21:25:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:27.052 21:25:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:27.052 21:25:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032
00:10:27.053 21:25:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:10:27.311 true
00:10:27.311 21:25:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476
00:10:27.311 21:25:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:27.569 21:25:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:27.569 21:25:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033
00:10:27.569 21:25:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:10:27.826 true
00:10:27.826 21:25:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476
00:10:27.826 21:25:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:29.200 Initializing NVMe Controllers
00:10:29.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:29.200 Controller IO queue size 128, less than required.
00:10:29.200 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:29.200 Controller IO queue size 128, less than required.
00:10:29.200 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:29.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:29.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:29.200 Initialization complete. Launching workers.
00:10:29.200 ========================================================
00:10:29.200                                                                           Latency(us)
00:10:29.200 Device Information                                                     :     IOPS    MiB/s    Average      min         max
00:10:29.200 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  2196.47     1.07   34899.31  2108.86  1069578.91
00:10:29.200 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15036.93     7.34    8512.27  2235.77   288711.67
00:10:29.200 ========================================================
00:10:29.200 Total                                                                  : 17233.40     8.41   11875.41  2108.86  1069578.91
00:10:29.200
21:25:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
21:25:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034
21:25:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:10:29.459 true
00:10:29.459 21:25:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2754476
00:10:29.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (2754476) - No such process
00:10:29.459 21:25:52 -- target/ns_hotplug_stress.sh@44 -- # wait 2754476
00:10:29.459 21:25:52 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:10:29.459 21:25:52 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini
00:10:29.459 21:25:52 -- nvmf/common.sh@477 -- # nvmfcleanup
00:10:29.459 21:25:52 -- nvmf/common.sh@117 -- # sync
00:10:29.459 21:25:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:29.459 21:25:52 -- nvmf/common.sh@120 -- # set +e
00:10:29.459 21:25:52 -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:29.459 21:25:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:29.459 rmmod nvme_tcp
00:10:29.459 rmmod nvme_fabrics
00:10:29.459 rmmod nvme_keyring
00:10:29.717 21:25:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:29.717 21:25:52 -- nvmf/common.sh@124 -- # set -e
00:10:29.717 21:25:52 -- nvmf/common.sh@125 -- # return 0
00:10:29.717 21:25:52 -- nvmf/common.sh@478 -- # '[' -n 2753931 ']'
00:10:29.717 21:25:52 -- nvmf/common.sh@479 -- # killprocess 2753931
00:10:29.717 21:25:52 -- common/autotest_common.sh@936 -- # '[' -z 2753931 ']'
00:10:29.717 21:25:52 -- common/autotest_common.sh@940 -- # kill -0 2753931
00:10:29.717 21:25:52 -- common/autotest_common.sh@941 -- # uname
00:10:29.717 21:25:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:10:29.717 21:25:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2753931
00:10:29.717 21:25:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:10:29.717 21:25:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:10:29.717 21:25:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2753931'
00:10:29.717 killing process with pid 2753931
00:10:29.717 21:25:52 -- common/autotest_common.sh@955 -- # kill 2753931
00:10:29.717 21:25:52 -- common/autotest_common.sh@960 -- # wait 2753931
00:10:29.717 21:25:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:10:29.717 21:25:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:10:29.717 21:25:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:10:29.717 21:25:52 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:29.717 21:25:52 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:29.717 21:25:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
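(For reference, in the latency summary above the Total row is the sum of the two per-namespace rows, 2196.47 + 15036.93 = 17233.40 IOPS, and its Average is their IOPS-weighted mean, (2196.47 * 34899.31 + 15036.93 * 8512.27) / 17233.40, which comes to roughly 11875 us; the min and max columns are taken across both rows.)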
00:10:29.717 21:25:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:29.717 21:25:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:32.248 21:25:54 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:32.248
00:10:32.248 real 0m42.737s
00:10:32.248 user 2m25.336s
00:10:32.248 sys 0m15.490s
00:10:32.248 21:25:54 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:10:32.248 21:25:54 -- common/autotest_common.sh@10 -- # set +x
00:10:32.248 ************************************
00:10:32.248 END TEST nvmf_ns_hotplug_stress
00:10:32.248 ************************************
00:10:32.248 21:25:54 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:32.248 21:25:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:10:32.248 21:25:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:32.248 21:25:54 -- common/autotest_common.sh@10 -- # set +x
00:10:32.248 ************************************
00:10:32.248 START TEST nvmf_connect_stress
00:10:32.248 ************************************
00:10:32.248 21:25:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:32.248 * Looking for test storage...
00:10:32.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:32.248 21:25:54 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:32.248 21:25:54 -- nvmf/common.sh@7 -- # uname -s
00:10:32.248 21:25:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:32.248 21:25:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:32.248 21:25:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:32.248 21:25:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:32.248 21:25:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:32.248 21:25:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:32.248 21:25:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:32.248 21:25:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:32.248 21:25:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:32.248 21:25:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:32.248 21:25:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:10:32.248 21:25:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:10:32.248 21:25:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:32.248 21:25:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:32.248 21:25:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:32.248 21:25:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:32.248 21:25:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:32.248 21:25:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:32.248 21:25:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:32.248 21:25:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:32.248 21:25:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:32.248 [paths/export.sh@3-@6: two more PATH= lines prepending the same /opt/go, /opt/protoc and /opt/golangci directories, the 'export PATH', and an echo of the resulting PATH; near-duplicate lines elided]
00:10:32.248 21:25:54 -- nvmf/common.sh@47 -- # : 0
00:10:32.248 21:25:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:32.248 21:25:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:32.248 21:25:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:32.248 21:25:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:32.248 21:25:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:32.248 21:25:54 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:10:32.248 21:25:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:10:32.248 21:25:54 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:10:32.248 21:25:54 -- target/connect_stress.sh@12 -- # nvmftestinit
00:10:32.248 21:25:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:10:32.248 21:25:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:32.248 21:25:54 -- nvmf/common.sh@437 -- # prepare_net_devs
00:10:32.248 21:25:54 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:10:32.248 21:25:54 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:10:32.248 21:25:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:32.248 21:25:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.248 21:25:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:32.248 21:25:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:32.248 21:25:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:32.248 21:25:54 -- common/autotest_common.sh@10 -- # set +x 00:10:38.812 21:26:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:38.812 21:26:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:38.812 21:26:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:38.812 21:26:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:38.812 21:26:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:38.812 21:26:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:38.812 21:26:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:38.812 21:26:01 -- nvmf/common.sh@295 -- # net_devs=() 00:10:38.812 21:26:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:38.812 21:26:01 -- nvmf/common.sh@296 -- # e810=() 00:10:38.812 21:26:01 -- nvmf/common.sh@296 -- # local -ga e810 00:10:38.812 21:26:01 -- nvmf/common.sh@297 -- # x722=() 00:10:38.812 21:26:01 -- nvmf/common.sh@297 -- # local -ga x722 00:10:38.812 21:26:01 -- nvmf/common.sh@298 -- # mlx=() 00:10:38.812 21:26:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:38.812 21:26:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.812 21:26:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.812 21:26:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.812 21:26:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.813 21:26:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.813 21:26:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.813 21:26:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.813 21:26:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.813 21:26:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.813 21:26:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.813 21:26:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.813 21:26:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:38.813 21:26:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:38.813 21:26:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:38.813 21:26:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.813 21:26:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:38.813 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:38.813 21:26:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.813 21:26:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:38.813 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:38.813 
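For reference, the device walk above can be reproduced by hand. A minimal sketch, assuming lspci is available; the vendor/device IDs (0x8086:0x159b for E810) and the sysfs layout are taken from the log, and the cvl_* netdev names come from the ice driver:

  # Hedged sketch: enumerate Intel E810 functions the same way
  # gather_supported_nvmf_pci_devs resolves PCI functions to kernel netdevs.
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      echo "Found $pci (0x8086 - 0x159b)"
      ls "/sys/bus/pci/devices/$pci/net/"   # e.g. cvl_0_0 for 0000:af:00.0
  done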
21:26:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:38.813 21:26:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.813 21:26:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.813 21:26:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:38.813 21:26:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.813 21:26:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:38.813 Found net devices under 0000:af:00.0: cvl_0_0 00:10:38.813 21:26:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.813 21:26:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.813 21:26:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.813 21:26:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:38.813 21:26:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.813 21:26:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:38.813 Found net devices under 0000:af:00.1: cvl_0_1 00:10:38.813 21:26:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.813 21:26:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:38.813 21:26:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:38.813 21:26:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:38.813 21:26:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:38.813 21:26:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.813 21:26:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.813 21:26:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.813 21:26:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:38.813 21:26:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.813 21:26:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.813 21:26:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:38.813 21:26:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.813 21:26:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.813 21:26:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:38.813 21:26:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:38.813 21:26:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.813 21:26:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.071 21:26:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.071 21:26:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.072 21:26:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:39.072 21:26:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.072 21:26:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.072 21:26:01 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.072 21:26:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:39.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:10:39.072 00:10:39.072 --- 10.0.0.2 ping statistics --- 00:10:39.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.072 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:10:39.072 21:26:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:39.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:10:39.072 00:10:39.072 --- 10.0.0.1 ping statistics --- 00:10:39.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.072 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:10:39.072 21:26:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.072 21:26:01 -- nvmf/common.sh@411 -- # return 0 00:10:39.072 21:26:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:39.072 21:26:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.072 21:26:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:39.072 21:26:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:39.072 21:26:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.072 21:26:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:39.072 21:26:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:39.072 21:26:01 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:39.072 21:26:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:39.072 21:26:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:39.072 21:26:01 -- common/autotest_common.sh@10 -- # set +x 00:10:39.331 21:26:01 -- nvmf/common.sh@470 -- # nvmfpid=2763603 00:10:39.331 21:26:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:39.331 21:26:01 -- nvmf/common.sh@471 -- # waitforlisten 2763603 00:10:39.331 21:26:01 -- common/autotest_common.sh@817 -- # '[' -z 2763603 ']' 00:10:39.331 21:26:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.331 21:26:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:39.331 21:26:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.331 21:26:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:39.331 21:26:01 -- common/autotest_common.sh@10 -- # set +x 00:10:39.331 [2024-04-24 21:26:02.010542] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:10:39.331 [2024-04-24 21:26:02.010590] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.331 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.331 [2024-04-24 21:26:02.083050] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:39.331 [2024-04-24 21:26:02.149704] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
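The nvmfappstart step above amounts to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A minimal sketch, assuming scripts/rpc.py and the default /var/tmp/spdk.sock socket; waitforlisten's exact polling logic is not shown in the log, so the until-loop is a hedged stand-in:

  # Launch the target in the namespace, as nvmf/common.sh@469 does above.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the RPC socket until the app is up and listening.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done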
00:10:39.331 [2024-04-24 21:26:02.149746] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.331 [2024-04-24 21:26:02.149755] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.331 [2024-04-24 21:26:02.149764] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.331 [2024-04-24 21:26:02.149787] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.331 [2024-04-24 21:26:02.149890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.331 [2024-04-24 21:26:02.149983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.331 [2024-04-24 21:26:02.149985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.266 21:26:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:40.266 21:26:02 -- common/autotest_common.sh@850 -- # return 0 00:10:40.266 21:26:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:40.266 21:26:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:40.266 21:26:02 -- common/autotest_common.sh@10 -- # set +x 00:10:40.266 21:26:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.266 21:26:02 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.266 21:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.266 21:26:02 -- common/autotest_common.sh@10 -- # set +x 00:10:40.266 [2024-04-24 21:26:02.865948] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.266 21:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.266 21:26:02 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:40.266 21:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.266 21:26:02 -- common/autotest_common.sh@10 -- # set +x 00:10:40.266 21:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.266 21:26:02 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.266 21:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.266 21:26:02 -- common/autotest_common.sh@10 -- # set +x 00:10:40.266 [2024-04-24 21:26:02.898598] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.266 21:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.266 21:26:02 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:40.266 21:26:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.266 21:26:02 -- common/autotest_common.sh@10 -- # set +x 00:10:40.266 NULL1 00:10:40.266 21:26:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.266 21:26:02 -- target/connect_stress.sh@21 -- # PERF_PID=2763859 00:10:40.266 21:26:02 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:40.266 21:26:02 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:40.266 21:26:02 -- target/connect_stress.sh@25 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # seq 1 20 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:02 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:03 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:03 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:03 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:03 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:03 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:03 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:03 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:03 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:03 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:40.266 21:26:03 -- target/connect_stress.sh@28 -- # cat 00:10:40.266 21:26:03 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:40.267 21:26:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.267 21:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.267 21:26:03 -- common/autotest_common.sh@10 -- # set +x 00:10:40.524 21:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.524 21:26:03 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:40.524 21:26:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:40.524 21:26:03 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.524 21:26:03 -- common/autotest_common.sh@10 -- # set +x 00:10:41.090 21:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:41.090 21:26:03 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:41.090 21:26:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.090 21:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:41.090 21:26:03 -- common/autotest_common.sh@10 -- # set +x 00:10:41.359 21:26:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:41.359 21:26:03 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:41.359 21:26:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.359 21:26:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:41.359 21:26:03 -- common/autotest_common.sh@10 -- # set +x 00:10:41.616 21:26:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:41.616 21:26:04 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:41.616 21:26:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.616 21:26:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:41.616 21:26:04 -- common/autotest_common.sh@10 -- # set +x 00:10:41.873 21:26:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:41.873 21:26:04 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:41.873 21:26:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:41.873 21:26:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:41.873 21:26:04 -- common/autotest_common.sh@10 -- # set +x 00:10:42.164 21:26:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:42.164 21:26:04 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:42.164 21:26:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:42.164 21:26:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:42.164 21:26:04 -- common/autotest_common.sh@10 -- # set +x 00:10:42.421 21:26:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:42.421 21:26:05 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:42.421 21:26:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:42.421 21:26:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:42.421 21:26:05 -- common/autotest_common.sh@10 -- # set +x 00:10:42.987 21:26:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:42.987 21:26:05 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:42.987 21:26:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:42.987 21:26:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:42.987 21:26:05 -- common/autotest_common.sh@10 -- # set +x 00:10:43.245 21:26:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:43.245 21:26:05 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:43.245 21:26:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:43.245 21:26:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.245 21:26:05 -- common/autotest_common.sh@10 -- # set +x 00:10:43.503 21:26:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:43.503 21:26:06 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:43.503 21:26:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:43.503 21:26:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.503 21:26:06 -- common/autotest_common.sh@10 -- # set +x 00:10:43.761 21:26:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:43.761 21:26:06 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:43.761 21:26:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:43.761 21:26:06 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:10:43.761 21:26:06 -- common/autotest_common.sh@10 -- # set +x 00:10:44.327 21:26:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:44.327 21:26:06 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:44.327 21:26:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.327 21:26:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:44.327 21:26:06 -- common/autotest_common.sh@10 -- # set +x 00:10:44.585 21:26:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:44.585 21:26:07 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:44.585 21:26:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.585 21:26:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:44.585 21:26:07 -- common/autotest_common.sh@10 -- # set +x 00:10:44.844 21:26:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:44.844 21:26:07 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:44.844 21:26:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.844 21:26:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:44.844 21:26:07 -- common/autotest_common.sh@10 -- # set +x 00:10:45.102 21:26:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:45.102 21:26:07 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:45.102 21:26:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.102 21:26:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:45.102 21:26:07 -- common/autotest_common.sh@10 -- # set +x 00:10:45.359 21:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:45.359 21:26:08 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:45.359 21:26:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.359 21:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:45.359 21:26:08 -- common/autotest_common.sh@10 -- # set +x 00:10:45.926 21:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:45.926 21:26:08 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:45.926 21:26:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.926 21:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:45.926 21:26:08 -- common/autotest_common.sh@10 -- # set +x 00:10:46.184 21:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:46.184 21:26:08 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:46.184 21:26:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.184 21:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:46.184 21:26:08 -- common/autotest_common.sh@10 -- # set +x 00:10:46.442 21:26:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:46.442 21:26:09 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:46.442 21:26:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.442 21:26:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:46.442 21:26:09 -- common/autotest_common.sh@10 -- # set +x 00:10:46.701 21:26:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:46.701 21:26:09 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:46.701 21:26:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.701 21:26:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:46.701 21:26:09 -- common/autotest_common.sh@10 -- # set +x 00:10:47.267 21:26:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:47.267 21:26:09 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:47.267 21:26:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.267 21:26:09 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:10:47.267 21:26:09 -- common/autotest_common.sh@10 -- # set +x 00:10:47.525 21:26:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:47.525 21:26:10 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:47.525 21:26:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.525 21:26:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:47.525 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:10:47.782 21:26:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:47.782 21:26:10 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:47.782 21:26:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.782 21:26:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:47.782 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:10:48.040 21:26:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.040 21:26:10 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:48.040 21:26:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.040 21:26:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.040 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:10:48.298 21:26:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.298 21:26:11 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:48.298 21:26:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.298 21:26:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.298 21:26:11 -- common/autotest_common.sh@10 -- # set +x 00:10:48.865 21:26:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:48.865 21:26:11 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:48.865 21:26:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.865 21:26:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:48.865 21:26:11 -- common/autotest_common.sh@10 -- # set +x 00:10:49.123 21:26:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.123 21:26:11 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:49.123 21:26:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.123 21:26:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.123 21:26:11 -- common/autotest_common.sh@10 -- # set +x 00:10:49.381 21:26:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.381 21:26:12 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:49.381 21:26:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.381 21:26:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.381 21:26:12 -- common/autotest_common.sh@10 -- # set +x 00:10:49.638 21:26:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.638 21:26:12 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:49.638 21:26:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.638 21:26:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.638 21:26:12 -- common/autotest_common.sh@10 -- # set +x 00:10:49.895 21:26:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.895 21:26:12 -- target/connect_stress.sh@34 -- # kill -0 2763859 00:10:49.895 21:26:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.895 21:26:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.153 21:26:12 -- common/autotest_common.sh@10 -- # set +x 00:10:50.412 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:50.412 21:26:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.412 21:26:13 -- target/connect_stress.sh@34 -- # kill -0 2763859 
00:10:50.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2763859) - No such process 00:10:50.412 21:26:13 -- target/connect_stress.sh@38 -- # wait 2763859 00:10:50.412 21:26:13 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:50.412 21:26:13 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:50.412 21:26:13 -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:50.412 21:26:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:50.412 21:26:13 -- nvmf/common.sh@117 -- # sync 00:10:50.412 21:26:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:50.412 21:26:13 -- nvmf/common.sh@120 -- # set +e 00:10:50.412 21:26:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:50.412 21:26:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:50.412 rmmod nvme_tcp 00:10:50.412 rmmod nvme_fabrics 00:10:50.412 rmmod nvme_keyring 00:10:50.412 21:26:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:50.412 21:26:13 -- nvmf/common.sh@124 -- # set -e 00:10:50.412 21:26:13 -- nvmf/common.sh@125 -- # return 0 00:10:50.412 21:26:13 -- nvmf/common.sh@478 -- # '[' -n 2763603 ']' 00:10:50.412 21:26:13 -- nvmf/common.sh@479 -- # killprocess 2763603 00:10:50.412 21:26:13 -- common/autotest_common.sh@936 -- # '[' -z 2763603 ']' 00:10:50.412 21:26:13 -- common/autotest_common.sh@940 -- # kill -0 2763603 00:10:50.412 21:26:13 -- common/autotest_common.sh@941 -- # uname 00:10:50.412 21:26:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:50.412 21:26:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2763603 00:10:50.412 21:26:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:50.412 21:26:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:50.412 21:26:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2763603' 00:10:50.412 killing process with pid 2763603 00:10:50.412 21:26:13 -- common/autotest_common.sh@955 -- # kill 2763603 00:10:50.412 21:26:13 -- common/autotest_common.sh@960 -- # wait 2763603 00:10:50.672 21:26:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:50.672 21:26:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:50.672 21:26:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:50.672 21:26:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:50.672 21:26:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:50.672 21:26:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.672 21:26:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.672 21:26:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.207 21:26:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:53.207 00:10:53.207 real 0m20.775s 00:10:53.207 user 0m40.582s 00:10:53.207 sys 0m10.461s 00:10:53.207 21:26:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:53.207 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:10:53.207 ************************************ 00:10:53.207 END TEST nvmf_connect_stress 00:10:53.207 ************************************ 00:10:53.207 21:26:15 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:53.207 21:26:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:53.207 21:26:15 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:10:53.207 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:10:53.207 ************************************ 00:10:53.207 START TEST nvmf_fused_ordering 00:10:53.207 ************************************ 00:10:53.207 21:26:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:53.207 * Looking for test storage... 00:10:53.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.207 21:26:15 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.207 21:26:15 -- nvmf/common.sh@7 -- # uname -s 00:10:53.207 21:26:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.207 21:26:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.207 21:26:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.207 21:26:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.207 21:26:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.207 21:26:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.207 21:26:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.207 21:26:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.207 21:26:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.207 21:26:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.207 21:26:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:53.207 21:26:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:53.207 21:26:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.207 21:26:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.207 21:26:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.207 21:26:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.207 21:26:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.207 21:26:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.207 21:26:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.207 21:26:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.207 21:26:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.207 21:26:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.207 21:26:15 -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.207 21:26:15 -- paths/export.sh@5 -- # export PATH 00:10:53.207 21:26:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.207 21:26:15 -- nvmf/common.sh@47 -- # : 0 00:10:53.207 21:26:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:53.207 21:26:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:53.207 21:26:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.207 21:26:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.207 21:26:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.207 21:26:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:53.207 21:26:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:53.207 21:26:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:53.207 21:26:15 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:53.207 21:26:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:53.207 21:26:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.207 21:26:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:53.207 21:26:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:53.207 21:26:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:53.207 21:26:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.207 21:26:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.207 21:26:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.207 21:26:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:53.207 21:26:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:53.207 21:26:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:53.207 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:10:59.769 21:26:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:59.769 21:26:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:59.769 21:26:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:59.769 21:26:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:59.769 21:26:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:59.769 21:26:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:59.769 21:26:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:59.769 21:26:22 -- nvmf/common.sh@295 -- # net_devs=() 00:10:59.769 21:26:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:59.769 21:26:22 -- nvmf/common.sh@296 -- # e810=() 00:10:59.769 21:26:22 -- nvmf/common.sh@296 -- # local -ga e810 00:10:59.769 21:26:22 -- nvmf/common.sh@297 -- # 
x722=() 00:10:59.769 21:26:22 -- nvmf/common.sh@297 -- # local -ga x722 00:10:59.769 21:26:22 -- nvmf/common.sh@298 -- # mlx=() 00:10:59.769 21:26:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:59.769 21:26:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.769 21:26:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.769 21:26:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.769 21:26:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.769 21:26:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.769 21:26:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.769 21:26:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.769 21:26:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.769 21:26:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.769 21:26:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.769 21:26:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.769 21:26:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:59.769 21:26:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:59.769 21:26:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:59.769 21:26:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:59.769 21:26:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:59.769 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:59.769 21:26:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:59.769 21:26:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:59.769 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:59.769 21:26:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:59.769 21:26:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:59.769 21:26:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.769 21:26:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:59.769 21:26:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.769 21:26:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:59.769 Found net devices under 0000:af:00.0: cvl_0_0 00:10:59.769 21:26:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
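The rpc_cmd sequence that configured the connect_stress target earlier (transport, subsystem, listener, null bdev) is replayed for this test below. As a standalone sketch, assuming scripts/rpc.py as the stand-in for the rpc_cmd helper, with all arguments copied verbatim from the log; nvmf_subsystem_add_ns appears only in the fused_ordering flow:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192   # flags as logged
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512           # ~1 GB null bdev, 512 B blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # fused_ordering only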
00:10:59.769 21:26:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:59.769 21:26:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.769 21:26:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:59.769 21:26:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.769 21:26:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:59.769 Found net devices under 0000:af:00.1: cvl_0_1 00:10:59.769 21:26:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.769 21:26:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:59.769 21:26:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:59.769 21:26:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:59.769 21:26:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:59.769 21:26:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.769 21:26:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.769 21:26:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.769 21:26:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:59.769 21:26:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.769 21:26:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.769 21:26:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:59.769 21:26:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.769 21:26:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.769 21:26:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:59.769 21:26:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:59.769 21:26:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.769 21:26:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.769 21:26:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.769 21:26:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.769 21:26:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:59.769 21:26:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.769 21:26:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.769 21:26:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.769 21:26:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:59.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:59.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:10:59.769 00:10:59.769 --- 10.0.0.2 ping statistics --- 00:10:59.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.769 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:10:59.769 21:26:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:59.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:10:59.769 00:10:59.769 --- 10.0.0.1 ping statistics --- 00:10:59.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.770 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:10:59.770 21:26:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.770 21:26:22 -- nvmf/common.sh@411 -- # return 0 00:10:59.770 21:26:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:59.770 21:26:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.770 21:26:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:59.770 21:26:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:59.770 21:26:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.770 21:26:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:59.770 21:26:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:59.770 21:26:22 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:59.770 21:26:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:59.770 21:26:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:59.770 21:26:22 -- common/autotest_common.sh@10 -- # set +x 00:10:59.770 21:26:22 -- nvmf/common.sh@470 -- # nvmfpid=2769229 00:10:59.770 21:26:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:59.770 21:26:22 -- nvmf/common.sh@471 -- # waitforlisten 2769229 00:10:59.770 21:26:22 -- common/autotest_common.sh@817 -- # '[' -z 2769229 ']' 00:10:59.770 21:26:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.770 21:26:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:59.770 21:26:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.770 21:26:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:59.770 21:26:22 -- common/autotest_common.sh@10 -- # set +x 00:10:59.770 [2024-04-24 21:26:22.635759] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:10:59.770 [2024-04-24 21:26:22.635807] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.028 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.028 [2024-04-24 21:26:22.710789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.028 [2024-04-24 21:26:22.781315] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.028 [2024-04-24 21:26:22.781359] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.028 [2024-04-24 21:26:22.781369] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.028 [2024-04-24 21:26:22.781378] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.028 [2024-04-24 21:26:22.781385] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
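Both tests run the same nvmftestinit plumbing seen above: move one E810 port into a private namespace, address both ends, open the NVMe/TCP port, and prove reachability in each direction. Consolidated as a sketch, with every name and address taken from the log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns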
00:11:00.028 [2024-04-24 21:26:22.781408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.599 21:26:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:00.599 21:26:23 -- common/autotest_common.sh@850 -- # return 0 00:11:00.599 21:26:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:00.599 21:26:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:00.599 21:26:23 -- common/autotest_common.sh@10 -- # set +x 00:11:00.599 21:26:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.599 21:26:23 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.599 21:26:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.599 21:26:23 -- common/autotest_common.sh@10 -- # set +x 00:11:00.599 [2024-04-24 21:26:23.478798] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.599 21:26:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.599 21:26:23 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:00.599 21:26:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.599 21:26:23 -- common/autotest_common.sh@10 -- # set +x 00:11:00.857 21:26:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.857 21:26:23 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.857 21:26:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.857 21:26:23 -- common/autotest_common.sh@10 -- # set +x 00:11:00.857 [2024-04-24 21:26:23.498982] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.857 21:26:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.857 21:26:23 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:00.857 21:26:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.857 21:26:23 -- common/autotest_common.sh@10 -- # set +x 00:11:00.857 NULL1 00:11:00.857 21:26:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.857 21:26:23 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:00.857 21:26:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.857 21:26:23 -- common/autotest_common.sh@10 -- # set +x 00:11:00.857 21:26:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.857 21:26:23 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:00.857 21:26:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.857 21:26:23 -- common/autotest_common.sh@10 -- # set +x 00:11:00.857 21:26:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.857 21:26:23 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:00.857 [2024-04-24 21:26:23.560054] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:11:00.857 [2024-04-24 21:26:23.560091] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2769489 ]
00:11:00.857 EAL: No free 2048 kB hugepages reported on node 1
00:11:01.792 Attached to nqn.2016-06.io.spdk:cnode1
00:11:01.792 Namespace ID: 1 size: 1GB
[00:11:01.792 - 00:11:05.161: fused_ordering(0) through fused_ordering(1023), 1024 repetitive per-iteration progress lines elided]
00:11:05.161 21:26:27 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:11:05.161 21:26:27 -- target/fused_ordering.sh@25 -- # nvmftestfini
00:11:05.161 21:26:27 -- nvmf/common.sh@477 -- # nvmfcleanup
00:11:05.161 21:26:27 -- nvmf/common.sh@117 -- # sync
00:11:05.161 21:26:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:05.161 21:26:27 -- nvmf/common.sh@120 -- # set +e
00:11:05.161 21:26:27 -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:05.161 21:26:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:05.161 rmmod nvme_tcp
00:11:05.161 rmmod nvme_fabrics
00:11:05.161 rmmod nvme_keyring
00:11:05.161 21:26:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:05.161 21:26:27 -- nvmf/common.sh@124 -- # set -e
00:11:05.161 21:26:27 -- nvmf/common.sh@125 -- # return 0
00:11:05.161 21:26:27 -- nvmf/common.sh@478 -- # '[' -n 2769229 ']'
00:11:05.161 21:26:27 -- nvmf/common.sh@479 -- # killprocess 2769229
00:11:05.161 21:26:27 -- common/autotest_common.sh@936 -- # '[' -z 2769229 ']'
00:11:05.161 21:26:27 -- common/autotest_common.sh@940 -- # kill -0 2769229
00:11:05.161 21:26:27 -- common/autotest_common.sh@941 -- # uname
00:11:05.161 21:26:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:11:05.161 21:26:27 -- common/autotest_common.sh@942 -- # ps --no-headers
-o comm= 2769229 00:11:05.161 21:26:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:05.161 21:26:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:05.161 21:26:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2769229' 00:11:05.161 killing process with pid 2769229 00:11:05.161 21:26:27 -- common/autotest_common.sh@955 -- # kill 2769229 00:11:05.161 21:26:27 -- common/autotest_common.sh@960 -- # wait 2769229 00:11:05.419 21:26:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:05.419 21:26:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:05.419 21:26:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:05.419 21:26:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:05.419 21:26:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:05.419 21:26:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.419 21:26:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.419 21:26:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.380 21:26:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:07.380 00:11:07.380 real 0m14.422s 00:11:07.380 user 0m8.487s 00:11:07.380 sys 0m8.578s 00:11:07.380 21:26:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:07.380 21:26:30 -- common/autotest_common.sh@10 -- # set +x 00:11:07.380 ************************************ 00:11:07.380 END TEST nvmf_fused_ordering 00:11:07.380 ************************************ 00:11:07.380 21:26:30 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:07.380 21:26:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:07.380 21:26:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:07.380 21:26:30 -- common/autotest_common.sh@10 -- # set +x 00:11:07.639 ************************************ 00:11:07.639 START TEST nvmf_delete_subsystem 00:11:07.639 ************************************ 00:11:07.639 21:26:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:07.639 * Looking for test storage... 
00:11:07.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.639 21:26:30 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.639 21:26:30 -- nvmf/common.sh@7 -- # uname -s 00:11:07.639 21:26:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.639 21:26:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.639 21:26:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.639 21:26:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.639 21:26:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.639 21:26:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.639 21:26:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.639 21:26:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.639 21:26:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.639 21:26:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.639 21:26:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:07.639 21:26:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:07.639 21:26:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.639 21:26:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.639 21:26:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.639 21:26:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.639 21:26:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.639 21:26:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.639 21:26:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.639 21:26:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.640 21:26:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.640 21:26:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.640 21:26:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.640 21:26:30 -- paths/export.sh@5 -- # export PATH 00:11:07.640 21:26:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.640 21:26:30 -- nvmf/common.sh@47 -- # : 0 00:11:07.640 21:26:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.640 21:26:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.640 21:26:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.640 21:26:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.640 21:26:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.640 21:26:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.640 21:26:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.640 21:26:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.899 21:26:30 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:07.899 21:26:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:07.899 21:26:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.899 21:26:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:07.899 21:26:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:07.899 21:26:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:07.899 21:26:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.899 21:26:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.899 21:26:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.899 21:26:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:07.899 21:26:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:07.899 21:26:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:07.899 21:26:30 -- common/autotest_common.sh@10 -- # set +x 00:11:14.464 21:26:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:14.464 21:26:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:14.464 21:26:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:14.464 21:26:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:14.464 21:26:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:14.464 21:26:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:14.464 21:26:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:14.464 21:26:37 -- nvmf/common.sh@295 -- # net_devs=() 00:11:14.464 21:26:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:14.464 21:26:37 -- nvmf/common.sh@296 -- # e810=() 00:11:14.464 21:26:37 -- nvmf/common.sh@296 -- # local -ga e810 00:11:14.464 21:26:37 -- nvmf/common.sh@297 -- # x722=() 
00:11:14.464 21:26:37 -- nvmf/common.sh@297 -- # local -ga x722 00:11:14.464 21:26:37 -- nvmf/common.sh@298 -- # mlx=() 00:11:14.464 21:26:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:14.464 21:26:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.464 21:26:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.464 21:26:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.464 21:26:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.464 21:26:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.464 21:26:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.464 21:26:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.464 21:26:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.464 21:26:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.464 21:26:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.464 21:26:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.464 21:26:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:14.464 21:26:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:14.464 21:26:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:14.464 21:26:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:14.464 21:26:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:14.464 21:26:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:14.464 21:26:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.465 21:26:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:14.465 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:14.465 21:26:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.465 21:26:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:14.465 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:14.465 21:26:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:14.465 21:26:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.465 21:26:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.465 21:26:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:14.465 21:26:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.465 21:26:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:14.465 Found net devices under 0000:af:00.0: cvl_0_0 00:11:14.465 21:26:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
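The device walk traced above is plain PCI-ID matching: nvmf/common.sh filters the bus for the advertised NIC IDs (here the two Intel E810 ports, vendor 0x8086 / device 0x159b, claimed by the ice driver) and reads each port's bound net device name out of sysfs; the same steps repeat below for the second port, 0000:af:00.1. A minimal standalone sketch of the same idea (lspci output format assumed; this is not the test's own helper):

#!/usr/bin/env bash
# Sketch: list E810 (8086:159b) PCI functions and the netdev bound to each.
for pci in $(lspci -Dn | awk '$3 == "8086:159b" {print $1}'); do
  # A bound kernel driver exposes its interface under the device's sysfs node.
  netdev=$(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)
  echo "Found net devices under $pci: ${netdev:-<none, driver not bound>}"
done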
00:11:14.465 21:26:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.465 21:26:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.465 21:26:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:14.465 21:26:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.465 21:26:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:14.465 Found net devices under 0000:af:00.1: cvl_0_1 00:11:14.465 21:26:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.465 21:26:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:14.465 21:26:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:14.465 21:26:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:14.465 21:26:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.465 21:26:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.465 21:26:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.465 21:26:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:14.465 21:26:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.465 21:26:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.465 21:26:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:14.465 21:26:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.465 21:26:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.465 21:26:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:14.465 21:26:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:14.465 21:26:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.465 21:26:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.465 21:26:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.465 21:26:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.465 21:26:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:14.465 21:26:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.465 21:26:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.465 21:26:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.465 21:26:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:14.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:11:14.465 00:11:14.465 --- 10.0.0.2 ping statistics --- 00:11:14.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.465 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:11:14.465 21:26:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:14.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:11:14.465 00:11:14.465 --- 10.0.0.1 ping statistics --- 00:11:14.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.465 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:11:14.465 21:26:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.465 21:26:37 -- nvmf/common.sh@411 -- # return 0 00:11:14.465 21:26:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:14.465 21:26:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.465 21:26:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:14.465 21:26:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.465 21:26:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:14.465 21:26:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:14.724 21:26:37 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:14.724 21:26:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:14.724 21:26:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:14.724 21:26:37 -- common/autotest_common.sh@10 -- # set +x 00:11:14.724 21:26:37 -- nvmf/common.sh@470 -- # nvmfpid=2773981 00:11:14.724 21:26:37 -- nvmf/common.sh@471 -- # waitforlisten 2773981 00:11:14.724 21:26:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:14.724 21:26:37 -- common/autotest_common.sh@817 -- # '[' -z 2773981 ']' 00:11:14.724 21:26:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.724 21:26:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:14.724 21:26:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.724 21:26:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:14.724 21:26:37 -- common/autotest_common.sh@10 -- # set +x 00:11:14.724 [2024-04-24 21:26:37.438355] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:11:14.724 [2024-04-24 21:26:37.438403] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.724 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.724 [2024-04-24 21:26:37.511414] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:14.724 [2024-04-24 21:26:37.584494] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.724 [2024-04-24 21:26:37.584537] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.724 [2024-04-24 21:26:37.584547] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.724 [2024-04-24 21:26:37.584555] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.724 [2024-04-24 21:26:37.584562] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
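At this point nvmfappstart has launched nvmf_tgt inside the server-side namespace and waitforlisten is polling the RPC socket until the application answers; the reactor-start notices that follow mark the moment it comes up. A rough sketch of that launch-and-wait step (socket path, relative binary path, and the rpc_get_methods method name are assumptions, not copied from the script):

# Start nvmf_tgt in the target netns; its RPC unix socket is visible outside it.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# Poll the RPC server until it responds (or bail out if the process died first).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  sleep 0.1
done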
00:11:14.724 [2024-04-24 21:26:37.584671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.724 [2024-04-24 21:26:37.584673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.659 21:26:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:15.659 21:26:38 -- common/autotest_common.sh@850 -- # return 0 00:11:15.659 21:26:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:15.659 21:26:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:15.659 21:26:38 -- common/autotest_common.sh@10 -- # set +x 00:11:15.659 21:26:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.659 21:26:38 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.659 21:26:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.659 21:26:38 -- common/autotest_common.sh@10 -- # set +x 00:11:15.659 [2024-04-24 21:26:38.273350] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.659 21:26:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.659 21:26:38 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:15.659 21:26:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.659 21:26:38 -- common/autotest_common.sh@10 -- # set +x 00:11:15.659 21:26:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.659 21:26:38 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.659 21:26:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.659 21:26:38 -- common/autotest_common.sh@10 -- # set +x 00:11:15.659 [2024-04-24 21:26:38.293577] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.659 21:26:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.659 21:26:38 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:15.659 21:26:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.659 21:26:38 -- common/autotest_common.sh@10 -- # set +x 00:11:15.659 NULL1 00:11:15.659 21:26:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.659 21:26:38 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:15.659 21:26:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.659 21:26:38 -- common/autotest_common.sh@10 -- # set +x 00:11:15.659 Delay0 00:11:15.659 21:26:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.659 21:26:38 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.659 21:26:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.659 21:26:38 -- common/autotest_common.sh@10 -- # set +x 00:11:15.659 21:26:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.659 21:26:38 -- target/delete_subsystem.sh@28 -- # perf_pid=2774122 00:11:15.659 21:26:38 -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:15.659 21:26:38 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:15.659 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.659 [2024-04-24 21:26:38.385306] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:11:17.559 21:26:40 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:17.559 21:26:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:11:17.559 21:26:40 -- common/autotest_common.sh@10 -- # set +x
00:11:17.817 Read completed with error (sct=0, sc=8)
00:11:17.818 Read completed with error (sct=0, sc=8)
00:11:17.818 Read completed with error (sct=0, sc=8)
00:11:17.818 starting I/O failed: -6
[00:11:17.818 - 00:11:17.819: several hundred further 'Read/Write completed with error (sct=0, sc=8)' completions interleaved with 'starting I/O failed: -6' submission failures elided; the pattern continues while nqn.2016-06.io.spdk:cnode1 is torn down under active I/O]
starting I/O failed: -6 00:11:17.819 Read completed with error (sct=0, sc=8) 00:11:17.819 starting I/O failed: -6 00:11:17.819 Read completed with error (sct=0, sc=8) 00:11:17.819 Read completed with error (sct=0, sc=8) 00:11:17.819 starting I/O failed: -6 00:11:17.819 Read completed with error (sct=0, sc=8) 00:11:17.819 starting I/O failed: -6 00:11:17.819 Read completed with error (sct=0, sc=8) 00:11:17.819 starting I/O failed: -6 00:11:17.819 Read completed with error (sct=0, sc=8) 00:11:17.819 Read completed with error (sct=0, sc=8) 00:11:17.819 starting I/O failed: -6 00:11:17.819 Read completed with error (sct=0, sc=8) 00:11:17.819 starting I/O failed: -6 00:11:17.819 Write completed with error (sct=0, sc=8) 00:11:17.819 starting I/O failed: -6 00:11:17.819 Read completed with error (sct=0, sc=8) 00:11:17.819 Write completed with error (sct=0, sc=8) 00:11:17.819 starting I/O failed: -6 00:11:17.819 Write completed with error (sct=0, sc=8) 00:11:17.819 starting I/O failed: -6 00:11:17.819 [2024-04-24 21:26:40.516657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5d00000c00 is same with the state(5) to be set 00:11:18.753 [2024-04-24 21:26:41.481802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf484a0 is same with the state(5) to be set 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read 
completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 [2024-04-24 21:26:41.517682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5d0000c250 is same with the state(5) to be set 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 [2024-04-24 21:26:41.517869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf45b90 is same with the state(5) to be set 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed 
with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 [2024-04-24 21:26:41.518236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47cc0 is same with the state(5) to be set 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Write completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 Read completed with error (sct=0, sc=8) 00:11:18.754 [2024-04-24 21:26:41.518407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47e50 is same with the state(5) to be set 00:11:18.754 [2024-04-24 21:26:41.519169] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf484a0 (9): Bad file 
00:11:18.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:18.754 21:26:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.754 21:26:41 -- target/delete_subsystem.sh@34 -- # delay=0 00:11:18.755 21:26:41 -- target/delete_subsystem.sh@35 -- # kill -0 2774122 00:11:18.755 21:26:41 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:18.755 Initializing NVMe Controllers
00:11:18.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:18.755 Controller IO queue size 128, less than required.
00:11:18.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:18.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:18.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:18.755 Initialization complete. Launching workers.
00:11:18.755 ========================================================
00:11:18.755                                                            Latency(us)
00:11:18.755 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:11:18.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  194.64    0.10  945905.61     580.69 1011012.67
00:11:18.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  178.26    0.09  846231.19     394.35 1012738.39
00:11:18.755 ========================================================
00:11:18.755 Total                                                                    :  372.90    0.18  898258.32     394.35 1012738.39
00:11:18.755
00:11:19.321 21:26:42 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:19.321 21:26:42 -- target/delete_subsystem.sh@35 -- # kill -0 2774122
00:11:19.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2774122) - No such process
00:11:19.321 21:26:42 -- target/delete_subsystem.sh@45 -- # NOT wait 2774122 00:11:19.321 21:26:42 -- common/autotest_common.sh@638 -- # local es=0 00:11:19.321 21:26:42 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 2774122 00:11:19.321 21:26:42 -- common/autotest_common.sh@626 -- # local arg=wait 00:11:19.321 21:26:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:19.321 21:26:42 -- common/autotest_common.sh@630 -- # type -t wait 00:11:19.321 21:26:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:19.321 21:26:42 -- common/autotest_common.sh@641 -- # wait 2774122 00:11:19.321 21:26:42 -- common/autotest_common.sh@641 -- # es=1 00:11:19.321 21:26:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:19.321 21:26:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:19.321 21:26:42 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:11:19.321 21:26:42 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:19.321 21:26:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.321 21:26:42 -- common/autotest_common.sh@10 -- # set +x 00:11:19.321 21:26:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:11:19.321 21:26:42 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.321 21:26:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.321 21:26:42 -- common/autotest_common.sh@10 -- # set +x
00:11:19.321 [2024-04-24 21:26:42.046195] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
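For readers decoding the failure wall above: the harness started spdk_nvme_perf against the subsystem and then deleted that subsystem while commands were still queued, so every outstanding command completed with sct=0/sc=8 (NVMe generic status "Command Aborted due to SQ Deletion") and fresh submissions failed with -6. A minimal standalone sketch of that sequence, reusing the exact flags and RPC from the trace; the paths are assumed relative to an SPDK checkout and the background/wait handling is illustrative, not copied from the harness:

    # Drive mixed random I/O at queue depth 128 against the TCP subsystem.
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    # Delete the subsystem out from under the initiator mid-run.
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # perf exits non-zero ("errors occurred"): its queued I/O was aborted.
    wait "$perf_pid" || echo "expected: queued I/O aborted with sct=0/sc=8"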
00:11:19.321 21:26:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.321 21:26:42 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.321 21:26:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.321 21:26:42 -- common/autotest_common.sh@10 -- # set +x 00:11:19.321 21:26:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.321 21:26:42 -- target/delete_subsystem.sh@54 -- # perf_pid=2774809 00:11:19.321 21:26:42 -- target/delete_subsystem.sh@56 -- # delay=0 00:11:19.321 21:26:42 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:19.321 21:26:42 -- target/delete_subsystem.sh@57 -- # kill -0 2774809 00:11:19.321 21:26:42 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:19.321 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.321 [2024-04-24 21:26:42.117259] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:19.887 21:26:42 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:19.887 21:26:42 -- target/delete_subsystem.sh@57 -- # kill -0 2774809 00:11:19.887 21:26:42 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:20.454 21:26:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:20.454 21:26:43 -- target/delete_subsystem.sh@57 -- # kill -0 2774809 00:11:20.454 21:26:43 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:20.712 21:26:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:20.712 21:26:43 -- target/delete_subsystem.sh@57 -- # kill -0 2774809 00:11:20.712 21:26:43 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:21.278 21:26:44 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:21.278 21:26:44 -- target/delete_subsystem.sh@57 -- # kill -0 2774809 00:11:21.278 21:26:44 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:21.844 21:26:44 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:21.844 21:26:44 -- target/delete_subsystem.sh@57 -- # kill -0 2774809 00:11:21.844 21:26:44 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:22.410 21:26:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:22.410 21:26:45 -- target/delete_subsystem.sh@57 -- # kill -0 2774809 00:11:22.410 21:26:45 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:22.670 Initializing NVMe Controllers 00:11:22.670 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:22.670 Controller IO queue size 128, less than required. 00:11:22.670 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:22.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:22.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:22.670 Initialization complete. Launching workers. 
00:11:22.670 ========================================================
00:11:22.670                                                            Latency(us)
00:11:22.670 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:11:22.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1003650.20 1000364.28 1011952.32
00:11:22.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1005008.40 1000511.21 1012337.38
00:11:22.670 ========================================================
00:11:22.670 Total                                                                    :  256.00    0.12 1004329.30 1000364.28 1012337.38
00:11:22.670
00:11:22.928 21:26:45 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:22.928 21:26:45 -- target/delete_subsystem.sh@57 -- # kill -0 2774809
00:11:22.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2774809) - No such process
00:11:22.928 21:26:45 -- target/delete_subsystem.sh@67 -- # wait 2774809 00:11:22.928 21:26:45 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:22.928 21:26:45 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:22.928 21:26:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:22.928 21:26:45 -- nvmf/common.sh@117 -- # sync 00:11:22.928 21:26:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:22.928 21:26:45 -- nvmf/common.sh@120 -- # set +e 00:11:22.928 21:26:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:22.928 21:26:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:22.928 rmmod nvme_tcp
00:11:22.928 rmmod nvme_fabrics
00:11:22.928 rmmod nvme_keyring
00:11:22.928 21:26:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:22.928 21:26:45 -- nvmf/common.sh@124 -- # set -e 00:11:22.928 21:26:45 -- nvmf/common.sh@125 -- # return 0 00:11:22.928 21:26:45 -- nvmf/common.sh@478 -- # '[' -n 2773981 ']' 00:11:22.928 21:26:45 -- nvmf/common.sh@479 -- # killprocess 2773981 00:11:22.928 21:26:45 -- common/autotest_common.sh@936 -- # '[' -z 2773981 ']' 00:11:22.928 21:26:45 -- common/autotest_common.sh@940 -- # kill -0 2773981 00:11:22.928 21:26:45 -- common/autotest_common.sh@941 -- # uname 00:11:22.928 21:26:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:22.928 21:26:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2773981 00:11:22.928 21:26:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:22.928 21:26:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:22.928 21:26:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2773981'
00:11:22.928 killing process with pid 2773981
00:11:22.928 21:26:45 -- common/autotest_common.sh@955 -- # kill 2773981 00:11:22.928 21:26:45 -- common/autotest_common.sh@960 -- # wait 2773981 00:11:23.186 21:26:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:23.186 21:26:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:23.186 21:26:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:23.186 21:26:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:23.186 21:26:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:23.186 21:26:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.186 21:26:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.186 21:26:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.715 21:26:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:25.715
00:11:25.715 real 0m17.625s
00:11:25.715 user 0m29.858s
00:11:25.715 sys 0m7.013s
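A note on the repeated kill -0 polling visible in this run: kill with signal 0 delivers nothing and only reports, via its exit status, whether the PID still exists, which is why the already-dead perf process surfaces as "No such process". A sketch of the wait loop the delay counter traces out; this is reconstructed from the @56/@57/@58/@60 entries, not copied from delete_subsystem.sh:

    delay=0
    # Poll every 0.5 s until spdk_nvme_perf exits, giving up after ~10 s.
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break
        sleep 0.5
    done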
00:11:25.715 21:26:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:25.715 21:26:48 -- common/autotest_common.sh@10 -- # set +x 00:11:25.715 ************************************ 00:11:25.715 END TEST nvmf_delete_subsystem 00:11:25.715 ************************************ 00:11:25.715 21:26:48 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:25.715 21:26:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:25.715 21:26:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:25.715 21:26:48 -- common/autotest_common.sh@10 -- # set +x 00:11:25.715 ************************************ 00:11:25.715 START TEST nvmf_ns_masking 00:11:25.715 ************************************ 00:11:25.715 21:26:48 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:25.715 * Looking for test storage... 00:11:25.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.715 21:26:48 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.715 21:26:48 -- nvmf/common.sh@7 -- # uname -s 00:11:25.716 21:26:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.716 21:26:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.716 21:26:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.716 21:26:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.716 21:26:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.716 21:26:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.716 21:26:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.716 21:26:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.716 21:26:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.716 21:26:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.716 21:26:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:25.716 21:26:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:25.716 21:26:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.716 21:26:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.716 21:26:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.716 21:26:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.716 21:26:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.716 21:26:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.716 21:26:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.716 21:26:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.716 21:26:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.716 21:26:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:... [remainder of the paths/export.sh@3 value elided; it repeats the toolchain list from paths/export.sh@2 above]
00:11:25.716 21:26:48 -- paths/export.sh@4 -- # PATH=... [same toolchain list again; elided]
00:11:25.716 21:26:48 -- paths/export.sh@5 -- # export PATH
00:11:25.716 21:26:48 -- paths/export.sh@6 -- # echo ... [echoes the final PATH value; elided]
00:11:25.716 21:26:48 -- nvmf/common.sh@47 -- # : 0 00:11:25.716 21:26:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:25.716 21:26:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:25.716 21:26:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.716 21:26:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.716 21:26:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.716 21:26:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:25.716 21:26:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:25.716 21:26:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:25.716 21:26:48 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:25.716 21:26:48 -- target/ns_masking.sh@11 -- # loops=5 00:11:25.716 21:26:48 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:25.716 21:26:48 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:25.716 21:26:48 -- target/ns_masking.sh@15 -- # uuidgen 00:11:25.716 21:26:48 -- target/ns_masking.sh@15 -- # HOSTID=956539be-c807-48c0-a7d0-1c7a3d30191b 00:11:25.716 21:26:48 -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:25.716 21:26:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:25.716 21:26:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.716 21:26:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:25.716 21:26:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:25.716 21:26:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:25.716 21:26:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.716 21:26:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.716 21:26:48 -- common/autotest_common.sh@22
-- # _remove_spdk_ns 00:11:25.716 21:26:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:25.716 21:26:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:25.716 21:26:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:25.716 21:26:48 -- common/autotest_common.sh@10 -- # set +x 00:11:32.279 21:26:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:32.279 21:26:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:32.279 21:26:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:32.279 21:26:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:32.279 21:26:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:32.279 21:26:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:32.279 21:26:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:32.279 21:26:55 -- nvmf/common.sh@295 -- # net_devs=() 00:11:32.279 21:26:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:32.279 21:26:55 -- nvmf/common.sh@296 -- # e810=() 00:11:32.279 21:26:55 -- nvmf/common.sh@296 -- # local -ga e810 00:11:32.279 21:26:55 -- nvmf/common.sh@297 -- # x722=() 00:11:32.279 21:26:55 -- nvmf/common.sh@297 -- # local -ga x722 00:11:32.279 21:26:55 -- nvmf/common.sh@298 -- # mlx=() 00:11:32.279 21:26:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:32.279 21:26:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.279 21:26:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.279 21:26:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.279 21:26:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.279 21:26:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.279 21:26:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.279 21:26:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.279 21:26:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.279 21:26:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.279 21:26:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.279 21:26:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.279 21:26:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:32.279 21:26:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:32.279 21:26:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:32.279 21:26:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:32.279 21:26:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:32.279 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:32.279 21:26:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:32.279 21:26:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:32.279 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:32.279 21:26:55 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:32.279 21:26:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:32.279 21:26:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.279 21:26:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:32.279 21:26:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.279 21:26:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:32.279 Found net devices under 0000:af:00.0: cvl_0_0 00:11:32.279 21:26:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.279 21:26:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:32.279 21:26:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.279 21:26:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:32.279 21:26:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.279 21:26:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:32.279 Found net devices under 0000:af:00.1: cvl_0_1 00:11:32.279 21:26:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.279 21:26:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:32.279 21:26:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:32.279 21:26:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:32.279 21:26:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:32.279 21:26:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.279 21:26:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.279 21:26:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.279 21:26:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:32.279 21:26:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.279 21:26:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.279 21:26:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:32.279 21:26:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.279 21:26:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.279 21:26:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:32.279 21:26:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:32.279 21:26:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.279 21:26:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.279 21:26:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.279 21:26:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.538 21:26:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:32.538 21:26:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.538 21:26:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.538 21:26:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.538 21:26:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:32.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:11:32.538 00:11:32.538 --- 10.0.0.2 ping statistics --- 00:11:32.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.538 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:11:32.538 21:26:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:32.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:11:32.538 00:11:32.538 --- 10.0.0.1 ping statistics --- 00:11:32.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.538 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:11:32.538 21:26:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.538 21:26:55 -- nvmf/common.sh@411 -- # return 0 00:11:32.538 21:26:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:32.538 21:26:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.538 21:26:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:32.538 21:26:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:32.538 21:26:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.538 21:26:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:32.538 21:26:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:32.538 21:26:55 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:32.538 21:26:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:32.538 21:26:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:32.538 21:26:55 -- common/autotest_common.sh@10 -- # set +x 00:11:32.538 21:26:55 -- nvmf/common.sh@470 -- # nvmfpid=2779108 00:11:32.538 21:26:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.538 21:26:55 -- nvmf/common.sh@471 -- # waitforlisten 2779108 00:11:32.538 21:26:55 -- common/autotest_common.sh@817 -- # '[' -z 2779108 ']' 00:11:32.538 21:26:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.538 21:26:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:32.538 21:26:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.538 21:26:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:32.538 21:26:55 -- common/autotest_common.sh@10 -- # set +x 00:11:32.538 [2024-04-24 21:26:55.424681] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:11:32.538 [2024-04-24 21:26:55.424728] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.797 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.797 [2024-04-24 21:26:55.500146] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.797 [2024-04-24 21:26:55.571464] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:32.797 [2024-04-24 21:26:55.571507] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.797 [2024-04-24 21:26:55.571516] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.797 [2024-04-24 21:26:55.571525] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.797 [2024-04-24 21:26:55.571547] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.797 [2024-04-24 21:26:55.571605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.797 [2024-04-24 21:26:55.571699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.797 [2024-04-24 21:26:55.571786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.797 [2024-04-24 21:26:55.571788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.364 21:26:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:33.364 21:26:56 -- common/autotest_common.sh@850 -- # return 0 00:11:33.364 21:26:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:33.364 21:26:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:33.364 21:26:56 -- common/autotest_common.sh@10 -- # set +x 00:11:33.622 21:26:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.622 21:26:56 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:33.622 [2024-04-24 21:26:56.421759] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.622 21:26:56 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:33.622 21:26:56 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:33.622 21:26:56 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:33.880 Malloc1 00:11:33.880 21:26:56 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:34.138 Malloc2 00:11:34.139 21:26:56 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:34.139 21:26:57 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:34.396 21:26:57 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.655 [2024-04-24 21:26:57.352919] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.655 21:26:57 -- target/ns_masking.sh@61 -- # connect 00:11:34.655 21:26:57 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 956539be-c807-48c0-a7d0-1c7a3d30191b -a 10.0.0.2 -s 4420 -i 4 00:11:34.655 21:26:57 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.655 21:26:57 -- common/autotest_common.sh@1184 -- # local i=0 00:11:34.655 21:26:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.655 21:26:57 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
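The whole target-side bring-up for this ns_masking test reduces to a handful of RPCs; the sketch below condenses the trace entries above, with the long rpc.py path shortened to a checkout-relative one (that path shortening is the only assumption; names, sizes and addresses are verbatim from the log):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1    # 64 MB bdev, 512 B blocks
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: connect with an explicit host NQN and the uuidgen host ID.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 956539be-c807-48c0-a7d0-1c7a3d30191b -a 10.0.0.2 -s 4420 -i 4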
00:11:34.655 21:26:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:37.181 21:26:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:37.181 21:26:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:37.181 21:26:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:37.181 21:26:59 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:37.181 21:26:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:37.181 21:26:59 -- common/autotest_common.sh@1194 -- # return 0 00:11:37.181 21:26:59 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:37.181 21:26:59 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:37.181 21:26:59 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:37.181 21:26:59 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:37.181 21:26:59 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:37.181 21:26:59 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:37.181 21:26:59 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:37.181 [ 0]:0x1 00:11:37.181 21:26:59 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:37.181 21:26:59 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:37.181 21:26:59 -- target/ns_masking.sh@40 -- # nguid=9198d0ee67bc49929242b93dd864b2d2 00:11:37.181 21:26:59 -- target/ns_masking.sh@41 -- # [[ 9198d0ee67bc49929242b93dd864b2d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:37.181 21:26:59 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:37.181 21:26:59 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:37.181 21:26:59 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:37.181 21:26:59 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:37.181 [ 0]:0x1 00:11:37.181 21:26:59 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:37.181 21:26:59 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:37.181 21:26:59 -- target/ns_masking.sh@40 -- # nguid=9198d0ee67bc49929242b93dd864b2d2 00:11:37.181 21:26:59 -- target/ns_masking.sh@41 -- # [[ 9198d0ee67bc49929242b93dd864b2d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:37.181 21:26:59 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:37.181 21:26:59 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:37.181 21:26:59 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:37.181 [ 1]:0x2 00:11:37.181 21:26:59 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:37.181 21:26:59 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:37.181 21:26:59 -- target/ns_masking.sh@40 -- # nguid=e605bdc0a9ae4e55a70974c583345115 00:11:37.181 21:26:59 -- target/ns_masking.sh@41 -- # [[ e605bdc0a9ae4e55a70974c583345115 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:37.181 21:26:59 -- target/ns_masking.sh@69 -- # disconnect 00:11:37.181 21:26:59 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.438 21:27:00 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.438 21:27:00 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:37.695 21:27:00 -- target/ns_masking.sh@77 -- # connect 1 00:11:37.695 21:27:00 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 956539be-c807-48c0-a7d0-1c7a3d30191b -a 10.0.0.2 -s 4420 -i 4 00:11:37.952 21:27:00 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:37.952 21:27:00 -- common/autotest_common.sh@1184 -- # local i=0 00:11:37.952 21:27:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.952 21:27:00 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:11:37.952 21:27:00 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:11:37.952 21:27:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:39.851 21:27:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:39.851 21:27:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:39.851 21:27:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.851 21:27:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:39.851 21:27:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.851 21:27:02 -- common/autotest_common.sh@1194 -- # return 0 00:11:39.851 21:27:02 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:39.851 21:27:02 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:39.851 21:27:02 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:39.851 21:27:02 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:39.851 21:27:02 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:39.851 21:27:02 -- common/autotest_common.sh@638 -- # local es=0 00:11:39.851 21:27:02 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:39.851 21:27:02 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:39.851 21:27:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:39.851 21:27:02 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:39.851 21:27:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:39.851 21:27:02 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:39.851 21:27:02 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:39.851 21:27:02 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:39.851 21:27:02 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:39.851 21:27:02 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.109 21:27:02 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:40.109 21:27:02 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.109 21:27:02 -- common/autotest_common.sh@641 -- # es=1 00:11:40.109 21:27:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:40.109 21:27:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:40.109 21:27:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:40.109 21:27:02 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:40.109 21:27:02 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.109 21:27:02 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:40.109 [ 0]:0x2 00:11:40.109 21:27:02 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:11:40.109 21:27:02 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.109 21:27:02 -- target/ns_masking.sh@40 -- # nguid=e605bdc0a9ae4e55a70974c583345115 00:11:40.109 21:27:02 -- target/ns_masking.sh@41 -- # [[ e605bdc0a9ae4e55a70974c583345115 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.109 21:27:02 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:40.377 21:27:03 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:40.377 21:27:03 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.377 21:27:03 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:40.377 [ 0]:0x1 00:11:40.377 21:27:03 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:40.377 21:27:03 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.377 21:27:03 -- target/ns_masking.sh@40 -- # nguid=9198d0ee67bc49929242b93dd864b2d2 00:11:40.377 21:27:03 -- target/ns_masking.sh@41 -- # [[ 9198d0ee67bc49929242b93dd864b2d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.377 21:27:03 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:40.377 21:27:03 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.377 21:27:03 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:40.377 [ 1]:0x2 00:11:40.377 21:27:03 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:40.377 21:27:03 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.636 21:27:03 -- target/ns_masking.sh@40 -- # nguid=e605bdc0a9ae4e55a70974c583345115 00:11:40.636 21:27:03 -- target/ns_masking.sh@41 -- # [[ e605bdc0a9ae4e55a70974c583345115 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.636 21:27:03 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:40.636 21:27:03 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:40.636 21:27:03 -- common/autotest_common.sh@638 -- # local es=0 00:11:40.636 21:27:03 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:40.636 21:27:03 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:40.636 21:27:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:40.636 21:27:03 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:40.636 21:27:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:40.636 21:27:03 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:40.636 21:27:03 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.636 21:27:03 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:40.636 21:27:03 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:40.636 21:27:03 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.636 21:27:03 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:40.636 21:27:03 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.636 21:27:03 -- common/autotest_common.sh@641 -- # es=1 00:11:40.636 21:27:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:40.636 21:27:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:40.636 21:27:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:40.636 21:27:03 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:40.636 21:27:03 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.636 21:27:03 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:40.894 [ 0]:0x2 00:11:40.894 21:27:03 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:40.894 21:27:03 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.894 21:27:03 -- target/ns_masking.sh@40 -- # nguid=e605bdc0a9ae4e55a70974c583345115 00:11:40.894 21:27:03 -- target/ns_masking.sh@41 -- # [[ e605bdc0a9ae4e55a70974c583345115 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.894 21:27:03 -- target/ns_masking.sh@91 -- # disconnect 00:11:40.894 21:27:03 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.894 21:27:03 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:41.152 21:27:03 -- target/ns_masking.sh@95 -- # connect 2 00:11:41.152 21:27:03 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 956539be-c807-48c0-a7d0-1c7a3d30191b -a 10.0.0.2 -s 4420 -i 4 00:11:41.152 21:27:03 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:41.152 21:27:03 -- common/autotest_common.sh@1184 -- # local i=0 00:11:41.152 21:27:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.152 21:27:03 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:11:41.152 21:27:03 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:11:41.152 21:27:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:43.050 21:27:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:43.050 21:27:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:43.308 21:27:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:43.308 21:27:05 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:11:43.308 21:27:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.308 21:27:05 -- common/autotest_common.sh@1194 -- # return 0 00:11:43.308 21:27:05 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:43.308 21:27:05 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:43.308 21:27:06 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:43.308 21:27:06 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:43.308 21:27:06 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:43.308 21:27:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.308 21:27:06 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:43.308 [ 0]:0x1 00:11:43.308 21:27:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:43.308 21:27:06 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.308 21:27:06 -- target/ns_masking.sh@40 -- # nguid=9198d0ee67bc49929242b93dd864b2d2 00:11:43.308 21:27:06 -- target/ns_masking.sh@41 -- # [[ 9198d0ee67bc49929242b93dd864b2d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.308 21:27:06 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:43.308 21:27:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.309 21:27:06 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:43.309 [ 1]:0x2 
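What the ns_masking trace above is exercising: a namespace attached with --no-auto-visible is hidden from every host until nvmf_ns_add_host grants that host visibility, and nvmf_ns_remove_host hides it again; the test detects visibility by reading the namespace NGUID, which comes back all zeroes for a masked namespace. A condensed sketch of the same flow (the NQNs, address, and bdev name mirror this job's values; the rpc.py path is the workspace copy used above):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subsys=nqn.2016-06.io.spdk:cnode1
  host=nqn.2016-06.io.spdk:host1
  # Attach Malloc1 as nsid 1, invisible to all hosts until explicitly allowed.
  $rpc nvmf_subsystem_add_ns $subsys Malloc1 -n 1 --no-auto-visible
  # From the initiator: a masked namespace reports an all-zero NGUID.
  nvme connect -t tcp -n $subsys -q $host -a 10.0.0.2 -s 4420
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # 000...000 while masked
  # Grant, verify, then revoke visibility for this host.
  $rpc nvmf_ns_add_host $subsys 1 $host
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # real NGUID once visible
  $rpc nvmf_ns_remove_host $subsys 1 $host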
00:11:43.309 21:27:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:43.309 21:27:06 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.309 21:27:06 -- target/ns_masking.sh@40 -- # nguid=e605bdc0a9ae4e55a70974c583345115 00:11:43.309 21:27:06 -- target/ns_masking.sh@41 -- # [[ e605bdc0a9ae4e55a70974c583345115 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.309 21:27:06 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:43.567 21:27:06 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:43.567 21:27:06 -- common/autotest_common.sh@638 -- # local es=0 00:11:43.567 21:27:06 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:43.567 21:27:06 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:43.567 21:27:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.567 21:27:06 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:43.567 21:27:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.567 21:27:06 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:43.567 21:27:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.567 21:27:06 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:43.567 21:27:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:43.567 21:27:06 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.567 21:27:06 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:43.567 21:27:06 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.567 21:27:06 -- common/autotest_common.sh@641 -- # es=1 00:11:43.567 21:27:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:43.567 21:27:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:43.567 21:27:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:43.567 21:27:06 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:43.567 21:27:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.567 21:27:06 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:43.567 [ 0]:0x2 00:11:43.567 21:27:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:43.567 21:27:06 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.567 21:27:06 -- target/ns_masking.sh@40 -- # nguid=e605bdc0a9ae4e55a70974c583345115 00:11:43.567 21:27:06 -- target/ns_masking.sh@41 -- # [[ e605bdc0a9ae4e55a70974c583345115 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.567 21:27:06 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:43.567 21:27:06 -- common/autotest_common.sh@638 -- # local es=0 00:11:43.567 21:27:06 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:43.567 21:27:06 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:43.567 21:27:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.567 21:27:06 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:43.825 21:27:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:11:43.825 21:27:06 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:43.825 21:27:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:11:43.825 21:27:06 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:43.825 21:27:06 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:11:43.825 21:27:06 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:11:43.825 [2024-04-24 21:27:06.614977] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:11:43.825 request:
00:11:43.825 {
00:11:43.825 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:11:43.825 "nsid": 2,
00:11:43.825 "host": "nqn.2016-06.io.spdk:host1",
00:11:43.825 "method": "nvmf_ns_remove_host",
00:11:43.825 "req_id": 1
00:11:43.825 }
00:11:43.825 Got JSON-RPC error response
00:11:43.825 response:
00:11:43.825 {
00:11:43.825 "code": -32602,
00:11:43.825 "message": "Invalid parameters"
00:11:43.825 }
00:11:43.825 21:27:06 -- common/autotest_common.sh@641 -- # es=1
00:11:43.825 21:27:06 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:11:43.825 21:27:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:11:43.825 21:27:06 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:11:43.825 21:27:06 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1
00:11:43.825 21:27:06 -- common/autotest_common.sh@638 -- # local es=0
00:11:43.825 21:27:06 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1
00:11:43.825 21:27:06 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible
00:11:43.825 21:27:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:11:43.825 21:27:06 -- common/autotest_common.sh@630 -- # type -t ns_is_visible
00:11:43.825 21:27:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:11:43.825 21:27:06 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1
00:11:43.825 21:27:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:11:43.825 21:27:06 -- target/ns_masking.sh@39 -- # grep 0x1
00:11:43.825 21:27:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:43.825 21:27:06 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:11:43.825 21:27:06 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000
00:11:43.825 21:27:06 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:44.083 21:27:06 -- common/autotest_common.sh@641 -- # es=1
00:11:44.083 21:27:06 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:11:44.083 21:27:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:11:44.083 21:27:06 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:11:44.083 21:27:06 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2
00:11:44.083 21:27:06 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:11:44.083 21:27:06 -- target/ns_masking.sh@39 -- # grep 0x2
00:11:44.083 [ 0]:0x2
00:11:44.083 21:27:06 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:44.083 21:27:06 --
target/ns_masking.sh@40 -- # jq -r .nguid 00:11:44.083 21:27:06 -- target/ns_masking.sh@40 -- # nguid=e605bdc0a9ae4e55a70974c583345115 00:11:44.083 21:27:06 -- target/ns_masking.sh@41 -- # [[ e605bdc0a9ae4e55a70974c583345115 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:44.083 21:27:06 -- target/ns_masking.sh@108 -- # disconnect 00:11:44.083 21:27:06 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:44.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.083 21:27:06 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.341 21:27:07 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:44.341 21:27:07 -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:44.341 21:27:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:44.341 21:27:07 -- nvmf/common.sh@117 -- # sync 00:11:44.341 21:27:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:44.341 21:27:07 -- nvmf/common.sh@120 -- # set +e 00:11:44.341 21:27:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:44.341 21:27:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:44.341 rmmod nvme_tcp 00:11:44.341 rmmod nvme_fabrics 00:11:44.341 rmmod nvme_keyring 00:11:44.341 21:27:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:44.341 21:27:07 -- nvmf/common.sh@124 -- # set -e 00:11:44.341 21:27:07 -- nvmf/common.sh@125 -- # return 0 00:11:44.341 21:27:07 -- nvmf/common.sh@478 -- # '[' -n 2779108 ']' 00:11:44.341 21:27:07 -- nvmf/common.sh@479 -- # killprocess 2779108 00:11:44.341 21:27:07 -- common/autotest_common.sh@936 -- # '[' -z 2779108 ']' 00:11:44.341 21:27:07 -- common/autotest_common.sh@940 -- # kill -0 2779108 00:11:44.341 21:27:07 -- common/autotest_common.sh@941 -- # uname 00:11:44.341 21:27:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.341 21:27:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2779108 00:11:44.600 21:27:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:44.600 21:27:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:44.600 21:27:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2779108' 00:11:44.600 killing process with pid 2779108 00:11:44.600 21:27:07 -- common/autotest_common.sh@955 -- # kill 2779108 00:11:44.600 21:27:07 -- common/autotest_common.sh@960 -- # wait 2779108 00:11:44.859 21:27:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:44.859 21:27:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:44.859 21:27:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:44.859 21:27:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:44.859 21:27:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:44.859 21:27:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.859 21:27:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:44.859 21:27:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.762 21:27:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:46.762 00:11:46.762 real 0m21.356s 00:11:46.762 user 0m51.404s 00:11:46.762 sys 0m7.851s 00:11:46.762 21:27:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:46.762 21:27:09 -- common/autotest_common.sh@10 -- # set +x 00:11:46.762 ************************************ 00:11:46.762 END TEST nvmf_ns_masking 00:11:46.762 
************************************ 00:11:46.762 21:27:09 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:46.762 21:27:09 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:46.762 21:27:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:46.762 21:27:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:46.762 21:27:09 -- common/autotest_common.sh@10 -- # set +x 00:11:47.054 ************************************ 00:11:47.054 START TEST nvmf_nvme_cli 00:11:47.054 ************************************ 00:11:47.054 21:27:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:47.054 * Looking for test storage... 00:11:47.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.054 21:27:09 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.054 21:27:09 -- nvmf/common.sh@7 -- # uname -s 00:11:47.054 21:27:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.054 21:27:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.054 21:27:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.054 21:27:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.054 21:27:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.054 21:27:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.054 21:27:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.054 21:27:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.333 21:27:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.333 21:27:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.333 21:27:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:47.333 21:27:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:47.333 21:27:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.333 21:27:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.333 21:27:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.333 21:27:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.333 21:27:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.333 21:27:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.333 21:27:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.333 21:27:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.333 21:27:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.333 21:27:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.333 21:27:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.333 21:27:09 -- paths/export.sh@5 -- # export PATH 00:11:47.333 21:27:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.333 21:27:09 -- nvmf/common.sh@47 -- # : 0 00:11:47.333 21:27:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:47.333 21:27:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:47.333 21:27:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.333 21:27:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.333 21:27:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.333 21:27:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:47.333 21:27:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:47.333 21:27:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:47.333 21:27:09 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:47.333 21:27:09 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:47.333 21:27:09 -- target/nvme_cli.sh@14 -- # devs=() 00:11:47.333 21:27:09 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:47.333 21:27:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:47.333 21:27:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.333 21:27:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:47.333 21:27:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:47.333 21:27:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:47.333 21:27:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.333 21:27:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.333 21:27:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.333 21:27:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:47.333 21:27:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:47.333 21:27:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:47.333 21:27:09 -- common/autotest_common.sh@10 -- # set +x 00:11:53.892 21:27:16 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:53.892 21:27:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:53.892 21:27:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:53.892 21:27:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:53.892 21:27:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:53.892 21:27:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:53.892 21:27:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:53.892 21:27:16 -- nvmf/common.sh@295 -- # net_devs=() 00:11:53.892 21:27:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:53.892 21:27:16 -- nvmf/common.sh@296 -- # e810=() 00:11:53.892 21:27:16 -- nvmf/common.sh@296 -- # local -ga e810 00:11:53.892 21:27:16 -- nvmf/common.sh@297 -- # x722=() 00:11:53.892 21:27:16 -- nvmf/common.sh@297 -- # local -ga x722 00:11:53.892 21:27:16 -- nvmf/common.sh@298 -- # mlx=() 00:11:53.892 21:27:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:53.892 21:27:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.892 21:27:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.892 21:27:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.892 21:27:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.892 21:27:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.892 21:27:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.892 21:27:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.892 21:27:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.892 21:27:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.892 21:27:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.892 21:27:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.892 21:27:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:53.892 21:27:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:53.892 21:27:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:53.892 21:27:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:53.892 21:27:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:53.892 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:53.892 21:27:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:53.892 21:27:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:53.892 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:53.892 21:27:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
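Interface selection above is driven by PCI IDs: nvmf/common.sh keeps tables of supported parts (e810, x722, mlx), and both ports of the Intel E810 NIC at 0000:af:00.0/.1 (vendor 0x8086, device 0x159b, ice driver) are picked as test devices. A rough stand-alone equivalent of that lookup (a hypothetical helper sketch, not part of the harness):

  # List E810 (8086:159b) ports and the kernel driver each one is bound to.
  for dev in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
      echo "$dev -> $(basename "$(readlink "/sys/bus/pci/devices/$dev/driver")")"
  done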
00:11:53.892 21:27:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:53.892 21:27:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:53.892 21:27:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.892 21:27:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:53.892 21:27:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.892 21:27:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:53.892 Found net devices under 0000:af:00.0: cvl_0_0 00:11:53.892 21:27:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.892 21:27:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:53.892 21:27:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.892 21:27:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:53.892 21:27:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.892 21:27:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:53.892 Found net devices under 0000:af:00.1: cvl_0_1 00:11:53.892 21:27:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.892 21:27:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:53.892 21:27:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:53.892 21:27:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:53.892 21:27:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:53.892 21:27:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.892 21:27:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.892 21:27:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.892 21:27:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:53.892 21:27:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.893 21:27:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.893 21:27:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:53.893 21:27:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.893 21:27:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.893 21:27:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:53.893 21:27:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:53.893 21:27:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.893 21:27:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.150 21:27:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.150 21:27:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.150 21:27:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:54.150 21:27:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.150 21:27:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.150 21:27:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.150 21:27:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:54.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:54.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:11:54.150 00:11:54.150 --- 10.0.0.2 ping statistics --- 00:11:54.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.150 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:11:54.150 21:27:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:11:54.151 00:11:54.151 --- 10.0.0.1 ping statistics --- 00:11:54.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.151 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:11:54.151 21:27:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.151 21:27:16 -- nvmf/common.sh@411 -- # return 0 00:11:54.151 21:27:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:54.151 21:27:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.151 21:27:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:54.151 21:27:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:54.151 21:27:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.151 21:27:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:54.151 21:27:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:54.151 21:27:17 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:54.151 21:27:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:54.151 21:27:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:54.151 21:27:17 -- common/autotest_common.sh@10 -- # set +x 00:11:54.151 21:27:17 -- nvmf/common.sh@470 -- # nvmfpid=2785617 00:11:54.151 21:27:17 -- nvmf/common.sh@471 -- # waitforlisten 2785617 00:11:54.151 21:27:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.151 21:27:17 -- common/autotest_common.sh@817 -- # '[' -z 2785617 ']' 00:11:54.151 21:27:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.151 21:27:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:54.151 21:27:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.151 21:27:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:54.151 21:27:17 -- common/autotest_common.sh@10 -- # set +x 00:11:54.409 [2024-04-24 21:27:17.071271] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:11:54.409 [2024-04-24 21:27:17.071321] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.409 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.409 [2024-04-24 21:27:17.145555] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.409 [2024-04-24 21:27:17.219301] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.409 [2024-04-24 21:27:17.219339] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
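The topology behind those pings: nvmf_tcp_init moves one port of the NIC pair (cvl_0_0) into a private network namespace as the target side (10.0.0.2) and leaves its peer (cvl_0_1) in the root namespace as the initiator side (10.0.0.1), so NVMe/TCP traffic really crosses the link between the two ports. The essential commands, condensed from the trace (interface names and addresses are this job's values):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # initiator -> target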
00:11:54.409 [2024-04-24 21:27:17.219348] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.409 [2024-04-24 21:27:17.219356] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.409 [2024-04-24 21:27:17.219380] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:54.409 [2024-04-24 21:27:17.219431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.409 [2024-04-24 21:27:17.219530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.409 [2024-04-24 21:27:17.219553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.409 [2024-04-24 21:27:17.219555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.344 21:27:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:55.344 21:27:17 -- common/autotest_common.sh@850 -- # return 0 00:11:55.344 21:27:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:55.344 21:27:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:55.344 21:27:17 -- common/autotest_common.sh@10 -- # set +x 00:11:55.344 21:27:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.344 21:27:17 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:55.344 21:27:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.344 21:27:17 -- common/autotest_common.sh@10 -- # set +x 00:11:55.344 [2024-04-24 21:27:17.928355] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.344 21:27:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.344 21:27:17 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:55.344 21:27:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.344 21:27:17 -- common/autotest_common.sh@10 -- # set +x 00:11:55.344 Malloc0 00:11:55.344 21:27:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.344 21:27:17 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:55.344 21:27:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.344 21:27:17 -- common/autotest_common.sh@10 -- # set +x 00:11:55.344 Malloc1 00:11:55.344 21:27:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.345 21:27:17 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:55.345 21:27:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.345 21:27:17 -- common/autotest_common.sh@10 -- # set +x 00:11:55.345 21:27:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.345 21:27:17 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:55.345 21:27:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.345 21:27:17 -- common/autotest_common.sh@10 -- # set +x 00:11:55.345 21:27:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.345 21:27:18 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:55.345 21:27:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.345 21:27:18 -- common/autotest_common.sh@10 -- # set +x 00:11:55.345 21:27:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.345 21:27:18 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420
00:11:55.345 21:27:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:11:55.345 21:27:18 -- common/autotest_common.sh@10 -- # set +x
00:11:55.345 [2024-04-24 21:27:18.012764] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:55.345 21:27:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:11:55.345 21:27:18 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:55.345 21:27:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:11:55.345 21:27:18 -- common/autotest_common.sh@10 -- # set +x
00:11:55.345 21:27:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:11:55.345 21:27:18 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420
00:11:55.345
00:11:55.345 Discovery Log Number of Records 2, Generation counter 2
00:11:55.345 =====Discovery Log Entry 0======
00:11:55.345 trtype: tcp
00:11:55.345 adrfam: ipv4
00:11:55.345 subtype: current discovery subsystem
00:11:55.345 treq: not required
00:11:55.345 portid: 0
00:11:55.345 trsvcid: 4420
00:11:55.345 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:55.345 traddr: 10.0.0.2
00:11:55.345 eflags: explicit discovery connections, duplicate discovery information
00:11:55.345 sectype: none
00:11:55.345 =====Discovery Log Entry 1======
00:11:55.345 trtype: tcp
00:11:55.345 adrfam: ipv4
00:11:55.345 subtype: nvme subsystem
00:11:55.345 treq: not required
00:11:55.345 portid: 0
00:11:55.345 trsvcid: 4420
00:11:55.345 subnqn: nqn.2016-06.io.spdk:cnode1
00:11:55.345 traddr: 10.0.0.2
00:11:55.345 eflags: none
00:11:55.345 sectype: none
00:11:55.345 21:27:18 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:11:55.345 21:27:18 -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:11:55.345 21:27:18 -- nvmf/common.sh@511 -- # local dev _
00:11:55.345 21:27:18 -- nvmf/common.sh@513 -- # read -r dev _
00:11:55.345 21:27:18 -- nvmf/common.sh@510 -- # nvme list
00:11:55.345 21:27:18 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]]
00:11:55.345 21:27:18 -- nvmf/common.sh@513 -- # read -r dev _
00:11:55.345 21:27:18 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]]
00:11:55.345 21:27:18 -- nvmf/common.sh@513 -- # read -r dev _
00:11:55.345 21:27:18 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:11:55.345 21:27:18 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:56.718 21:27:19 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:11:56.718 21:27:19 -- common/autotest_common.sh@1184 -- # local i=0
00:11:56.718 21:27:19 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0
00:11:56.718 21:27:19 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]]
00:11:56.718 21:27:19 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2
00:11:56.718 21:27:19 -- common/autotest_common.sh@1191 -- # sleep 2
00:11:59.249 21:27:21 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:11:59.249 21:27:21 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:11:59.249 21:27:21 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME
00:11:59.249 21:27:21 -- common/autotest_common.sh@1193 -- # nvme_devices=2
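The nvme_cli test above follows the standard initiator sequence: read the discovery log (two records, the discovery subsystem itself plus nqn.2016-06.io.spdk:cnode1), connect, then poll until both namespaces surface as block devices carrying the subsystem serial. Stand-alone, that is roughly (hostnqn/hostid are the values nvme gen-hostnqn produced for this job; the loop is a simplified form of waitforserial):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  hostid=006f0d1b-21c0-e711-906e-00163566263e
  nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn=$hostnqn --hostid=$hostid
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=$hostnqn --hostid=$hostid
  # Wait until both namespaces show up, matching on the controller serial.
  until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 2 )); do
      sleep 2
  done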
00:11:59.249 21:27:21 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.249 21:27:21 -- common/autotest_common.sh@1194 -- # return 0 00:11:59.249 21:27:21 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:59.249 21:27:21 -- nvmf/common.sh@511 -- # local dev _ 00:11:59.249 21:27:21 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:59.249 21:27:21 -- nvmf/common.sh@510 -- # nvme list 00:11:59.249 21:27:21 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:59.249 21:27:21 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:59.249 21:27:21 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:59.249 21:27:21 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:59.249 21:27:21 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:59.249 21:27:21 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:11:59.249 21:27:21 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:59.249 21:27:21 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:59.249 21:27:21 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:11:59.249 21:27:21 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:59.249 21:27:21 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:59.249 /dev/nvme0n1 ]] 00:11:59.249 21:27:21 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:59.249 21:27:21 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:59.249 21:27:21 -- nvmf/common.sh@511 -- # local dev _ 00:11:59.249 21:27:21 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:59.249 21:27:21 -- nvmf/common.sh@510 -- # nvme list 00:11:59.249 21:27:21 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:11:59.249 21:27:21 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:59.249 21:27:21 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:11:59.249 21:27:21 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:59.249 21:27:21 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:59.249 21:27:21 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:11:59.249 21:27:21 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:59.249 21:27:21 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:59.249 21:27:21 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:11:59.249 21:27:21 -- nvmf/common.sh@513 -- # read -r dev _ 00:11:59.249 21:27:21 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:59.249 21:27:21 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.249 21:27:21 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.249 21:27:21 -- common/autotest_common.sh@1205 -- # local i=0 00:11:59.249 21:27:21 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:59.249 21:27:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.249 21:27:21 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:59.249 21:27:21 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.249 21:27:21 -- common/autotest_common.sh@1217 -- # return 0 00:11:59.249 21:27:21 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:59.249 21:27:21 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.249 21:27:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.249 21:27:21 -- common/autotest_common.sh@10 -- # set +x 00:11:59.249 21:27:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.249 21:27:21 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:59.249 21:27:21 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:59.249 21:27:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:59.249 21:27:21 -- nvmf/common.sh@117 -- # sync 00:11:59.249 21:27:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:59.249 21:27:21 -- nvmf/common.sh@120 -- # set +e 00:11:59.249 21:27:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:59.249 21:27:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:59.249 rmmod nvme_tcp 00:11:59.249 rmmod nvme_fabrics 00:11:59.249 rmmod nvme_keyring 00:11:59.249 21:27:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:59.249 21:27:21 -- nvmf/common.sh@124 -- # set -e 00:11:59.249 21:27:21 -- nvmf/common.sh@125 -- # return 0 00:11:59.249 21:27:21 -- nvmf/common.sh@478 -- # '[' -n 2785617 ']' 00:11:59.249 21:27:21 -- nvmf/common.sh@479 -- # killprocess 2785617 00:11:59.249 21:27:21 -- common/autotest_common.sh@936 -- # '[' -z 2785617 ']' 00:11:59.249 21:27:21 -- common/autotest_common.sh@940 -- # kill -0 2785617 00:11:59.249 21:27:21 -- common/autotest_common.sh@941 -- # uname 00:11:59.249 21:27:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:59.249 21:27:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2785617 00:11:59.249 21:27:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:59.249 21:27:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:59.249 21:27:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2785617' 00:11:59.249 killing process with pid 2785617 00:11:59.249 21:27:21 -- common/autotest_common.sh@955 -- # kill 2785617 00:11:59.249 21:27:21 -- common/autotest_common.sh@960 -- # wait 2785617 00:11:59.249 21:27:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:59.249 21:27:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:59.249 21:27:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:59.249 21:27:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:59.249 21:27:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:59.249 21:27:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.249 21:27:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:59.249 21:27:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.783 21:27:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:01.783 00:12:01.783 real 0m14.402s 00:12:01.783 user 0m21.262s 00:12:01.783 sys 0m6.204s 00:12:01.783 21:27:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:01.783 21:27:24 -- common/autotest_common.sh@10 -- # set +x 00:12:01.783 ************************************ 00:12:01.783 END TEST nvmf_nvme_cli 00:12:01.783 ************************************ 00:12:01.783 21:27:24 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:01.783 21:27:24 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:01.783 21:27:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:01.783 21:27:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:01.783 21:27:24 -- common/autotest_common.sh@10 -- # set +x 00:12:01.783 ************************************ 00:12:01.783 START TEST nvmf_vfio_user 00:12:01.783 ************************************ 00:12:01.783 21:27:24 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:01.783 * Looking for test storage... 00:12:01.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.783 21:27:24 -- nvmf/common.sh@7 -- # uname -s 00:12:01.783 21:27:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.783 21:27:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.783 21:27:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.783 21:27:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.783 21:27:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.783 21:27:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.783 21:27:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.783 21:27:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.783 21:27:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.783 21:27:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.783 21:27:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:01.783 21:27:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:01.783 21:27:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.783 21:27:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.783 21:27:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.783 21:27:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.783 21:27:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.783 21:27:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.783 21:27:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.783 21:27:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.783 21:27:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.783 21:27:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.783 21:27:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.783 21:27:24 -- paths/export.sh@5 -- # export PATH 00:12:01.783 21:27:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.783 21:27:24 -- nvmf/common.sh@47 -- # : 0 00:12:01.783 21:27:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.783 21:27:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.783 21:27:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.783 21:27:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.783 21:27:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.783 21:27:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.783 21:27:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.783 21:27:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2787086 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2787086' 00:12:01.783 Process pid: 2787086 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2787086 00:12:01.783 21:27:24 -- common/autotest_common.sh@817 -- # '[' -z 2787086 ']' 00:12:01.783 21:27:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.783 21:27:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:01.783 21:27:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:01.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.783 21:27:24 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:01.783 21:27:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:01.783 21:27:24 -- common/autotest_common.sh@10 -- # set +x 00:12:01.783 [2024-04-24 21:27:24.611348] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:12:01.783 [2024-04-24 21:27:24.611393] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.783 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.054 [2024-04-24 21:27:24.681775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.054 [2024-04-24 21:27:24.755109] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.054 [2024-04-24 21:27:24.755145] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.054 [2024-04-24 21:27:24.755154] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.054 [2024-04-24 21:27:24.755162] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.054 [2024-04-24 21:27:24.755184] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.054 [2024-04-24 21:27:24.755229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.054 [2024-04-24 21:27:24.755319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.054 [2024-04-24 21:27:24.755409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.054 [2024-04-24 21:27:24.755411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.638 21:27:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:02.638 21:27:25 -- common/autotest_common.sh@850 -- # return 0 00:12:02.638 21:27:25 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:04.012 21:27:26 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:04.012 21:27:26 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:04.012 21:27:26 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:04.012 21:27:26 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:04.012 21:27:26 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:04.012 21:27:26 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:04.012 Malloc1 00:12:04.012 21:27:26 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:04.270 21:27:27 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:04.528 21:27:27 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:04.528 21:27:27 
-- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:04.528 21:27:27 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:04.528 21:27:27 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:04.787 Malloc2 00:12:04.787 21:27:27 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:05.045 21:27:27 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:05.303 21:27:27 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:05.303 21:27:28 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:05.303 21:27:28 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:05.303 21:27:28 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:05.303 21:27:28 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:05.303 21:27:28 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:05.303 21:27:28 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:05.303 [2024-04-24 21:27:28.162011] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:12:05.303 [2024-04-24 21:27:28.162050] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2787721 ] 00:12:05.303 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.303 [2024-04-24 21:27:28.192335] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:05.563 [2024-04-24 21:27:28.194738] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:05.563 [2024-04-24 21:27:28.194758] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6178658000 00:12:05.563 [2024-04-24 21:27:28.195744] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:05.563 [2024-04-24 21:27:28.196740] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:05.563 [2024-04-24 21:27:28.197744] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:05.563 [2024-04-24 21:27:28.198750] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:05.563 [2024-04-24 21:27:28.199751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:05.563 [2024-04-24 21:27:28.200760] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:05.563 [2024-04-24 21:27:28.201767] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:05.563 [2024-04-24 21:27:28.202768] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:05.563 [2024-04-24 21:27:28.203775] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:05.563 [2024-04-24 21:27:28.203789] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f617864d000 00:12:05.563 [2024-04-24 21:27:28.204684] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:05.563 [2024-04-24 21:27:28.217548] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:05.563 [2024-04-24 21:27:28.217574] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:05.563 [2024-04-24 21:27:28.222892] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:05.563 [2024-04-24 21:27:28.222935] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:05.563 [2024-04-24 21:27:28.223006] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq 
(no timeout) 00:12:05.563 [2024-04-24 21:27:28.223029] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:05.563 [2024-04-24 21:27:28.223036] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:05.563 [2024-04-24 21:27:28.223888] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:05.563 [2024-04-24 21:27:28.223899] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:05.563 [2024-04-24 21:27:28.223908] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:05.563 [2024-04-24 21:27:28.224893] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:05.563 [2024-04-24 21:27:28.224902] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:05.563 [2024-04-24 21:27:28.224911] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:05.563 [2024-04-24 21:27:28.225901] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:05.563 [2024-04-24 21:27:28.225911] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:05.563 [2024-04-24 21:27:28.226906] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:05.563 [2024-04-24 21:27:28.226918] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:05.563 [2024-04-24 21:27:28.226924] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:05.563 [2024-04-24 21:27:28.226932] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:05.563 [2024-04-24 21:27:28.227039] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:05.563 [2024-04-24 21:27:28.227045] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:05.563 [2024-04-24 21:27:28.227052] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:05.563 [2024-04-24 21:27:28.227914] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:05.563 [2024-04-24 21:27:28.228916] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:05.563 [2024-04-24 21:27:28.229927] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:05.563 [2024-04-24 21:27:28.230928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:05.563 [2024-04-24 21:27:28.230995] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:05.563 [2024-04-24 21:27:28.231944] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:05.563 [2024-04-24 21:27:28.231954] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:05.563 [2024-04-24 21:27:28.231960] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:05.563 [2024-04-24 21:27:28.231980] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:05.563 [2024-04-24 21:27:28.231994] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:05.563 [2024-04-24 21:27:28.232012] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:05.563 [2024-04-24 21:27:28.232019] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:05.563 [2024-04-24 21:27:28.232034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:05.563 [2024-04-24 21:27:28.232074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:05.563 [2024-04-24 21:27:28.232086] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:05.563 [2024-04-24 21:27:28.232092] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:05.563 [2024-04-24 21:27:28.232098] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:05.563 [2024-04-24 21:27:28.232104] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:05.563 [2024-04-24 21:27:28.232111] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:05.563 [2024-04-24 21:27:28.232117] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:05.563 [2024-04-24 21:27:28.232125] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:05.563 [2024-04-24 21:27:28.232134] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:05.563 [2024-04-24 21:27:28.232145] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:05.563 [2024-04-24 21:27:28.232160] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:05.563 [2024-04-24 21:27:28.232173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.563 [2024-04-24 21:27:28.232182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.563 [2024-04-24 21:27:28.232192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.564 [2024-04-24 21:27:28.232201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.564 [2024-04-24 21:27:28.232207] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232219] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:05.564 [2024-04-24 21:27:28.232239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:05.564 [2024-04-24 21:27:28.232246] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:05.564 [2024-04-24 21:27:28.232253] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232263] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232270] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:05.564 [2024-04-24 21:27:28.232296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:05.564 [2024-04-24 21:27:28.232336] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232345] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232354] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:05.564 [2024-04-24 21:27:28.232360] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:05.564 [2024-04-24 21:27:28.232367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:05.564 
[2024-04-24 21:27:28.232383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:05.564 [2024-04-24 21:27:28.232395] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:05.564 [2024-04-24 21:27:28.232405] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232415] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232422] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:05.564 [2024-04-24 21:27:28.232428] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:05.564 [2024-04-24 21:27:28.232435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:05.564 [2024-04-24 21:27:28.232459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:05.564 [2024-04-24 21:27:28.232474] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232483] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232491] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:05.564 [2024-04-24 21:27:28.232496] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:05.564 [2024-04-24 21:27:28.232503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:05.564 [2024-04-24 21:27:28.232516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:05.564 [2024-04-24 21:27:28.232526] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232534] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232543] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232551] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232557] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232563] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:05.564 [2024-04-24 21:27:28.232569] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:05.564 [2024-04-24 21:27:28.232576] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:05.564 [2024-04-24 21:27:28.232594] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:05.564 [2024-04-24 21:27:28.232605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:05.564 [2024-04-24 21:27:28.232618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:05.564 [2024-04-24 21:27:28.232626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:05.564 [2024-04-24 21:27:28.232639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:05.564 [2024-04-24 21:27:28.232648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:05.564 [2024-04-24 21:27:28.232661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:05.564 [2024-04-24 21:27:28.232669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:05.564 [2024-04-24 21:27:28.232681] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:05.564 [2024-04-24 21:27:28.232687] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:05.564 [2024-04-24 21:27:28.232692] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:05.564 [2024-04-24 21:27:28.232696] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:05.564 [2024-04-24 21:27:28.232703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:05.564 [2024-04-24 21:27:28.232711] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:05.564 [2024-04-24 21:27:28.232717] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:05.564 [2024-04-24 21:27:28.232724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:05.564 [2024-04-24 21:27:28.232732] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:05.564 [2024-04-24 21:27:28.232737] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:05.564 [2024-04-24 21:27:28.232744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:05.564 [2024-04-24 21:27:28.232752] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:05.564 [2024-04-24 21:27:28.232758] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:05.564 [2024-04-24 21:27:28.232765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:05.564 [2024-04-24 21:27:28.232773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:05.564 [2024-04-24 21:27:28.232787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:05.564 [2024-04-24 21:27:28.232798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:05.564 [2024-04-24 21:27:28.232807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:05.564 ===================================================== 00:12:05.564 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:05.564 ===================================================== 00:12:05.564 Controller Capabilities/Features 00:12:05.564 ================================ 00:12:05.564 Vendor ID: 4e58 00:12:05.564 Subsystem Vendor ID: 4e58 00:12:05.564 Serial Number: SPDK1 00:12:05.564 Model Number: SPDK bdev Controller 00:12:05.564 Firmware Version: 24.05 00:12:05.564 Recommended Arb Burst: 6 00:12:05.564 IEEE OUI Identifier: 8d 6b 50 00:12:05.564 Multi-path I/O 00:12:05.564 May have multiple subsystem ports: Yes 00:12:05.564 May have multiple controllers: Yes 00:12:05.564 Associated with SR-IOV VF: No 00:12:05.564 Max Data Transfer Size: 131072 00:12:05.564 Max Number of Namespaces: 32 00:12:05.564 Max Number of I/O Queues: 127 00:12:05.564 NVMe Specification Version (VS): 1.3 00:12:05.564 NVMe Specification Version (Identify): 1.3 00:12:05.564 Maximum Queue Entries: 256 00:12:05.564 Contiguous Queues Required: Yes 00:12:05.564 Arbitration Mechanisms Supported 00:12:05.564 Weighted Round Robin: Not Supported 00:12:05.564 Vendor Specific: Not Supported 00:12:05.564 Reset Timeout: 15000 ms 00:12:05.564 Doorbell Stride: 4 bytes 00:12:05.564 NVM Subsystem Reset: Not Supported 00:12:05.564 Command Sets Supported 00:12:05.564 NVM Command Set: Supported 00:12:05.564 Boot Partition: Not Supported 00:12:05.564 Memory Page Size Minimum: 4096 bytes 00:12:05.564 Memory Page Size Maximum: 4096 bytes 00:12:05.564 Persistent Memory Region: Not Supported 00:12:05.564 Optional Asynchronous Events Supported 00:12:05.564 Namespace Attribute Notices: Supported 00:12:05.564 Firmware Activation Notices: Not Supported 00:12:05.564 ANA Change Notices: Not Supported 00:12:05.564 PLE Aggregate Log Change Notices: Not Supported 00:12:05.564 LBA Status Info Alert Notices: Not Supported 00:12:05.565 EGE Aggregate Log Change Notices: Not Supported 00:12:05.565 Normal NVM Subsystem Shutdown event: Not Supported 00:12:05.565 Zone Descriptor Change Notices: Not Supported 00:12:05.565 Discovery Log Change Notices: Not Supported 00:12:05.565 Controller Attributes 00:12:05.565 128-bit Host Identifier: Supported 00:12:05.565 Non-Operational Permissive Mode: Not Supported 00:12:05.565 NVM Sets: Not Supported 00:12:05.565 Read Recovery Levels: Not Supported 00:12:05.565 Endurance Groups: Not Supported 00:12:05.565 Predictable Latency Mode: Not Supported 00:12:05.565 Traffic Based Keep ALive: Not Supported 00:12:05.565 Namespace Granularity: Not Supported 
00:12:05.565 SQ Associations: Not Supported 00:12:05.565 UUID List: Not Supported 00:12:05.565 Multi-Domain Subsystem: Not Supported 00:12:05.565 Fixed Capacity Management: Not Supported 00:12:05.565 Variable Capacity Management: Not Supported 00:12:05.565 Delete Endurance Group: Not Supported 00:12:05.565 Delete NVM Set: Not Supported 00:12:05.565 Extended LBA Formats Supported: Not Supported 00:12:05.565 Flexible Data Placement Supported: Not Supported 00:12:05.565 00:12:05.565 Controller Memory Buffer Support 00:12:05.565 ================================ 00:12:05.565 Supported: No 00:12:05.565 00:12:05.565 Persistent Memory Region Support 00:12:05.565 ================================ 00:12:05.565 Supported: No 00:12:05.565 00:12:05.565 Admin Command Set Attributes 00:12:05.565 ============================ 00:12:05.565 Security Send/Receive: Not Supported 00:12:05.565 Format NVM: Not Supported 00:12:05.565 Firmware Activate/Download: Not Supported 00:12:05.565 Namespace Management: Not Supported 00:12:05.565 Device Self-Test: Not Supported 00:12:05.565 Directives: Not Supported 00:12:05.565 NVMe-MI: Not Supported 00:12:05.565 Virtualization Management: Not Supported 00:12:05.565 Doorbell Buffer Config: Not Supported 00:12:05.565 Get LBA Status Capability: Not Supported 00:12:05.565 Command & Feature Lockdown Capability: Not Supported 00:12:05.565 Abort Command Limit: 4 00:12:05.565 Async Event Request Limit: 4 00:12:05.565 Number of Firmware Slots: N/A 00:12:05.565 Firmware Slot 1 Read-Only: N/A 00:12:05.565 Firmware Activation Without Reset: N/A 00:12:05.565 Multiple Update Detection Support: N/A 00:12:05.565 Firmware Update Granularity: No Information Provided 00:12:05.565 Per-Namespace SMART Log: No 00:12:05.565 Asymmetric Namespace Access Log Page: Not Supported 00:12:05.565 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:05.565 Command Effects Log Page: Supported 00:12:05.565 Get Log Page Extended Data: Supported 00:12:05.565 Telemetry Log Pages: Not Supported 00:12:05.565 Persistent Event Log Pages: Not Supported 00:12:05.565 Supported Log Pages Log Page: May Support 00:12:05.565 Commands Supported & Effects Log Page: Not Supported 00:12:05.565 Feature Identifiers & Effects Log Page:May Support 00:12:05.565 NVMe-MI Commands & Effects Log Page: May Support 00:12:05.565 Data Area 4 for Telemetry Log: Not Supported 00:12:05.565 Error Log Page Entries Supported: 128 00:12:05.565 Keep Alive: Supported 00:12:05.565 Keep Alive Granularity: 10000 ms 00:12:05.565 00:12:05.565 NVM Command Set Attributes 00:12:05.565 ========================== 00:12:05.565 Submission Queue Entry Size 00:12:05.565 Max: 64 00:12:05.565 Min: 64 00:12:05.565 Completion Queue Entry Size 00:12:05.565 Max: 16 00:12:05.565 Min: 16 00:12:05.565 Number of Namespaces: 32 00:12:05.565 Compare Command: Supported 00:12:05.565 Write Uncorrectable Command: Not Supported 00:12:05.565 Dataset Management Command: Supported 00:12:05.565 Write Zeroes Command: Supported 00:12:05.565 Set Features Save Field: Not Supported 00:12:05.565 Reservations: Not Supported 00:12:05.565 Timestamp: Not Supported 00:12:05.565 Copy: Supported 00:12:05.565 Volatile Write Cache: Present 00:12:05.565 Atomic Write Unit (Normal): 1 00:12:05.565 Atomic Write Unit (PFail): 1 00:12:05.565 Atomic Compare & Write Unit: 1 00:12:05.565 Fused Compare & Write: Supported 00:12:05.565 Scatter-Gather List 00:12:05.565 SGL Command Set: Supported (Dword aligned) 00:12:05.565 SGL Keyed: Not Supported 00:12:05.565 SGL Bit Bucket Descriptor: Not Supported 00:12:05.565 
SGL Metadata Pointer: Not Supported 00:12:05.565 Oversized SGL: Not Supported 00:12:05.565 SGL Metadata Address: Not Supported 00:12:05.565 SGL Offset: Not Supported 00:12:05.565 Transport SGL Data Block: Not Supported 00:12:05.565 Replay Protected Memory Block: Not Supported 00:12:05.565 00:12:05.565 Firmware Slot Information 00:12:05.565 ========================= 00:12:05.565 Active slot: 1 00:12:05.565 Slot 1 Firmware Revision: 24.05 00:12:05.565 00:12:05.565 00:12:05.565 Commands Supported and Effects 00:12:05.565 ============================== 00:12:05.565 Admin Commands 00:12:05.565 -------------- 00:12:05.565 Get Log Page (02h): Supported 00:12:05.565 Identify (06h): Supported 00:12:05.565 Abort (08h): Supported 00:12:05.565 Set Features (09h): Supported 00:12:05.565 Get Features (0Ah): Supported 00:12:05.565 Asynchronous Event Request (0Ch): Supported 00:12:05.565 Keep Alive (18h): Supported 00:12:05.565 I/O Commands 00:12:05.565 ------------ 00:12:05.565 Flush (00h): Supported LBA-Change 00:12:05.565 Write (01h): Supported LBA-Change 00:12:05.565 Read (02h): Supported 00:12:05.565 Compare (05h): Supported 00:12:05.565 Write Zeroes (08h): Supported LBA-Change 00:12:05.565 Dataset Management (09h): Supported LBA-Change 00:12:05.565 Copy (19h): Supported LBA-Change 00:12:05.565 Unknown (79h): Supported LBA-Change 00:12:05.565 Unknown (7Ah): Supported 00:12:05.565 00:12:05.565 Error Log 00:12:05.565 ========= 00:12:05.565 00:12:05.565 Arbitration 00:12:05.565 =========== 00:12:05.565 Arbitration Burst: 1 00:12:05.565 00:12:05.565 Power Management 00:12:05.565 ================ 00:12:05.565 Number of Power States: 1 00:12:05.565 Current Power State: Power State #0 00:12:05.565 Power State #0: 00:12:05.565 Max Power: 0.00 W 00:12:05.565 Non-Operational State: Operational 00:12:05.565 Entry Latency: Not Reported 00:12:05.565 Exit Latency: Not Reported 00:12:05.565 Relative Read Throughput: 0 00:12:05.565 Relative Read Latency: 0 00:12:05.565 Relative Write Throughput: 0 00:12:05.565 Relative Write Latency: 0 00:12:05.565 Idle Power: Not Reported 00:12:05.565 Active Power: Not Reported 00:12:05.565 Non-Operational Permissive Mode: Not Supported 00:12:05.565 00:12:05.565 Health Information 00:12:05.565 ================== 00:12:05.565 Critical Warnings: 00:12:05.565 Available Spare Space: OK 00:12:05.565 Temperature: OK 00:12:05.565 Device Reliability: OK 00:12:05.565 Read Only: No 00:12:05.565 Volatile Memory Backup: OK 00:12:05.565 Current Temperature: 0 Kelvin (-273 Celsius) [2024-04-24 21:27:28.232898] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:05.565 [2024-04-24 21:27:28.232910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:05.565 [2024-04-24 21:27:28.232937] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:05.565 [2024-04-24 21:27:28.232947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.565 [2024-04-24 21:27:28.232955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.565 [2024-04-24 21:27:28.232963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.565 [2024-04-24 21:27:28.232970]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.565 [2024-04-24 21:27:28.233955] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:05.565 [2024-04-24 21:27:28.233967] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:05.565 [2024-04-24 21:27:28.234954] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:05.565 [2024-04-24 21:27:28.235003] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:05.565 [2024-04-24 21:27:28.235010] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:05.565 [2024-04-24 21:27:28.235966] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:05.565 [2024-04-24 21:27:28.235978] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:05.565 [2024-04-24 21:27:28.236028] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:05.565 [2024-04-24 21:27:28.236995] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:05.566 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:05.566 Available Spare: 0% 00:12:05.566 Available Spare Threshold: 0% 00:12:05.566 Life Percentage Used: 0% 00:12:05.566 Data Units Read: 0 00:12:05.566 Data Units Written: 0 00:12:05.566 Host Read Commands: 0 00:12:05.566 Host Write Commands: 0 00:12:05.566 Controller Busy Time: 0 minutes 00:12:05.566 Power Cycles: 0 00:12:05.566 Power On Hours: 0 hours 00:12:05.566 Unsafe Shutdowns: 0 00:12:05.566 Unrecoverable Media Errors: 0 00:12:05.566 Lifetime Error Log Entries: 0 00:12:05.566 Warning Temperature Time: 0 minutes 00:12:05.566 Critical Temperature Time: 0 minutes 00:12:05.566 00:12:05.566 Number of Queues 00:12:05.566 ================ 00:12:05.566 Number of I/O Submission Queues: 127 00:12:05.566 Number of I/O Completion Queues: 127 00:12:05.566 00:12:05.566 Active Namespaces 00:12:05.566 ================= 00:12:05.566 Namespace ID:1 00:12:05.566 Error Recovery Timeout: Unlimited 00:12:05.566 Command Set Identifier: NVM (00h) 00:12:05.566 Deallocate: Supported 00:12:05.566 Deallocated/Unwritten Error: Not Supported 00:12:05.566 Deallocated Read Value: Unknown 00:12:05.566 Deallocate in Write Zeroes: Not Supported 00:12:05.566 Deallocated Guard Field: 0xFFFF 00:12:05.566 Flush: Supported 00:12:05.566 Reservation: Supported 00:12:05.566 Namespace Sharing Capabilities: Multiple Controllers 00:12:05.566 Size (in LBAs): 131072 (0GiB) 00:12:05.566 Capacity (in LBAs): 131072 (0GiB) 00:12:05.566 Utilization (in LBAs): 131072 (0GiB) 00:12:05.566 NGUID: B66C3BB98ACE4059AC4C3B96C34F605E 00:12:05.566 UUID: b66c3bb9-8ace-4059-ac4c-3b96c34f605e 00:12:05.566 Thin Provisioning: Not Supported 00:12:05.566 Per-NS Atomic Units: Yes 00:12:05.566 Atomic Boundary Size (Normal): 0 00:12:05.566 Atomic Boundary Size (PFail): 0 00:12:05.566 Atomic Boundary Offset: 0 00:12:05.566 Maximum Single Source Range Length: 65535
00:12:05.566 Maximum Copy Length: 65535 00:12:05.566 Maximum Source Range Count: 1 00:12:05.566 NGUID/EUI64 Never Reused: No 00:12:05.566 Namespace Write Protected: No 00:12:05.566 Number of LBA Formats: 1 00:12:05.566 Current LBA Format: LBA Format #00 00:12:05.566 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:05.566 00:12:05.566 21:27:28 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:05.566 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.566 [2024-04-24 21:27:28.434178] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:10.835 [2024-04-24 21:27:33.455932] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:10.835 Initializing NVMe Controllers 00:12:10.835 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:10.835 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:10.835 Initialization complete. Launching workers. 00:12:10.835 ======================================================== 00:12:10.835 Latency(us) 00:12:10.835 Device Information : IOPS MiB/s Average min max 00:12:10.835 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39941.11 156.02 3204.92 923.11 6708.87 00:12:10.835 ======================================================== 00:12:10.835 Total : 39941.11 156.02 3204.92 923.11 6708.87 00:12:10.835 00:12:10.835 21:27:33 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:10.835 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.835 [2024-04-24 21:27:33.669908] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:16.102 [2024-04-24 21:27:38.707845] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:16.102 Initializing NVMe Controllers 00:12:16.102 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:16.102 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:16.102 Initialization complete. Launching workers. 
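The read-mode results above and the write-mode results just below come from near-identical spdk_nvme_perf invocations; only -w changes. A template with the flags glossed (the glosses are best-effort readings of spdk_nvme_perf's options, not taken from this log):

perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
args=(
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    -s 256      # hugepage memory size in MB
    -g          # single-file hugepage segments
    -q 128      # queue depth
    -o 4096     # I/O size in bytes
    -w read     # workload: read here, write in the next run
    -t 5        # run time in seconds
    -c 0x2      # core mask, i.e. lcore 1 only - hence "NSID 1 with lcore 1" above
)
"$perf" "${args[@]}"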
00:12:16.102 ======================================================== 00:12:16.102 Latency(us) 00:12:16.102 Device Information : IOPS MiB/s Average min max 00:12:16.102 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15999.24 62.50 8011.54 5946.07 11973.07 00:12:16.102 ======================================================== 00:12:16.102 Total : 15999.24 62.50 8011.54 5946.07 11973.07 00:12:16.102 00:12:16.102 21:27:38 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:16.102 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.102 [2024-04-24 21:27:38.929861] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:21.373 [2024-04-24 21:27:43.993682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:21.373 Initializing NVMe Controllers 00:12:21.373 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:21.373 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:21.373 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:21.373 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:21.373 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:21.373 Initialization complete. Launching workers. 00:12:21.373 Starting thread on core 2 00:12:21.373 Starting thread on core 3 00:12:21.373 Starting thread on core 1 00:12:21.373 21:27:44 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:21.373 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.632 [2024-04-24 21:27:44.291883] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:24.916 [2024-04-24 21:27:47.510655] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:24.916 Initializing NVMe Controllers 00:12:24.916 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:24.916 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:24.916 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:24.916 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:24.916 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:24.916 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:24.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:24.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:24.916 Initialization complete. Launching workers. 
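The reconnect run above was launched with -c 0xE and the arbitration run with -c 0xf, which is why worker threads come up on cores 1-3 and 0-3 respectively (see the thread-start lines above and below). A hypothetical helper, not part of the test, for decoding such masks:

# Decode an SPDK/DPDK core mask into the lcores it selects.
decode_coremask() {
    local mask=$(( $1 ))
    local core
    for core in $(seq 0 31); do
        (( (mask >> core) & 1 )) && echo "lcore $core"
    done
}
decode_coremask 0xE     # -> lcore 1, lcore 2, lcore 3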
00:12:24.916 Starting thread on core 1 with urgent priority queue 00:12:24.916 Starting thread on core 2 with urgent priority queue 00:12:24.916 Starting thread on core 3 with urgent priority queue 00:12:24.916 Starting thread on core 0 with urgent priority queue 00:12:24.916 SPDK bdev Controller (SPDK1 ) core 0: 3357.67 IO/s 29.78 secs/100000 ios 00:12:24.916 SPDK bdev Controller (SPDK1 ) core 1: 2928.33 IO/s 34.15 secs/100000 ios 00:12:24.916 SPDK bdev Controller (SPDK1 ) core 2: 2397.67 IO/s 41.71 secs/100000 ios 00:12:24.916 SPDK bdev Controller (SPDK1 ) core 3: 3660.33 IO/s 27.32 secs/100000 ios 00:12:24.916 ======================================================== 00:12:24.916 00:12:24.916 21:27:47 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:24.916 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.916 [2024-04-24 21:27:47.797621] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:25.174 [2024-04-24 21:27:47.833006] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:25.174 Initializing NVMe Controllers 00:12:25.174 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:25.174 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:25.174 Namespace ID: 1 size: 0GB 00:12:25.174 Initialization complete. 00:12:25.174 INFO: using host memory buffer for IO 00:12:25.174 Hello world! 00:12:25.174 21:27:47 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:25.174 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.432 [2024-04-24 21:27:48.116879] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:26.366 Initializing NVMe Controllers 00:12:26.366 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.366 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.366 Initialization complete. Launching workers. 
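In the overhead output that follows, the "submit (in ns)" and "complete (in ns)" summaries give per-IO software overhead in nanoseconds, while the histogram buckets are in microseconds; each bucket line reads "range in us: cumulative percent (count in bucket)" - the percentage column is cumulative, which is why the final submit bucket reaches 100.0000%. As a hypothetical aid (histogram.txt is assumed, not produced by this test), the bucket containing the 99th percentile can be located like so:

awk '$2 == "-" && $4 ~ /%$/ {
    sub(/:$/, "", $3)                 # bucket lines look like: 3.008 - 3.021: 0.0584% ( 7)
    if ($4 + 0 >= 99) { print "p99 falls in the", $1, "-", $3, "us bucket"; exit }
}' histogram.txt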
00:12:26.366 submit (in ns) avg, min, max = 7233.2, 2995.2, 4002440.8 00:12:26.366 complete (in ns) avg, min, max = 19035.6, 1644.0, 4994812.0 00:12:26.366 00:12:26.367 Submit histogram 00:12:26.367 ================ 00:12:26.367 Range in us Cumulative Count 00:12:26.367 2.995 - 3.008: 0.0175% ( 3) 00:12:26.367 3.008 - 3.021: 0.0584% ( 7) 00:12:26.367 3.021 - 3.034: 0.1051% ( 8) 00:12:26.367 3.034 - 3.046: 0.1985% ( 16) 00:12:26.367 3.046 - 3.059: 0.3036% ( 18) 00:12:26.367 3.059 - 3.072: 0.6831% ( 65) 00:12:26.367 3.072 - 3.085: 1.4714% ( 135) 00:12:26.367 3.085 - 3.098: 2.9135% ( 247) 00:12:26.367 3.098 - 3.110: 5.2432% ( 399) 00:12:26.367 3.110 - 3.123: 8.3727% ( 536) 00:12:26.367 3.123 - 3.136: 12.2672% ( 667) 00:12:26.367 3.136 - 3.149: 16.4827% ( 722) 00:12:26.367 3.149 - 3.162: 21.8836% ( 925) 00:12:26.367 3.162 - 3.174: 27.6055% ( 980) 00:12:26.367 3.174 - 3.187: 33.9814% ( 1092) 00:12:26.367 3.187 - 3.200: 41.0463% ( 1210) 00:12:26.367 3.200 - 3.213: 47.0777% ( 1033) 00:12:26.367 3.213 - 3.226: 50.9196% ( 658) 00:12:26.367 3.226 - 3.238: 54.2710% ( 574) 00:12:26.367 3.238 - 3.251: 58.1713% ( 668) 00:12:26.367 3.251 - 3.264: 61.8322% ( 627) 00:12:26.367 3.264 - 3.277: 64.9442% ( 533) 00:12:26.367 3.277 - 3.302: 70.8005% ( 1003) 00:12:26.367 3.302 - 3.328: 77.7486% ( 1190) 00:12:26.367 3.328 - 3.354: 84.3405% ( 1129) 00:12:26.367 3.354 - 3.379: 86.8628% ( 432) 00:12:26.367 3.379 - 3.405: 88.2525% ( 238) 00:12:26.367 3.405 - 3.430: 89.1400% ( 152) 00:12:26.367 3.430 - 3.456: 90.3953% ( 215) 00:12:26.367 3.456 - 3.482: 92.0068% ( 276) 00:12:26.367 3.482 - 3.507: 93.5774% ( 269) 00:12:26.367 3.507 - 3.533: 95.0196% ( 247) 00:12:26.367 3.533 - 3.558: 96.2223% ( 206) 00:12:26.367 3.558 - 3.584: 97.3317% ( 190) 00:12:26.367 3.584 - 3.610: 98.2893% ( 164) 00:12:26.367 3.610 - 3.635: 98.8906% ( 103) 00:12:26.367 3.635 - 3.661: 99.2118% ( 55) 00:12:26.367 3.661 - 3.686: 99.4570% ( 42) 00:12:26.367 3.686 - 3.712: 99.5271% ( 12) 00:12:26.367 3.712 - 3.738: 99.5796% ( 9) 00:12:26.367 3.738 - 3.763: 99.5971% ( 3) 00:12:26.367 3.763 - 3.789: 99.6088% ( 2) 00:12:26.367 3.789 - 3.814: 99.6146% ( 1) 00:12:26.367 3.968 - 3.994: 99.6205% ( 1) 00:12:26.367 4.147 - 4.173: 99.6263% ( 1) 00:12:26.367 5.453 - 5.478: 99.6322% ( 1) 00:12:26.367 5.606 - 5.632: 99.6380% ( 1) 00:12:26.367 5.658 - 5.683: 99.6438% ( 1) 00:12:26.367 5.683 - 5.709: 99.6497% ( 1) 00:12:26.367 5.914 - 5.939: 99.6555% ( 1) 00:12:26.367 6.067 - 6.093: 99.6614% ( 1) 00:12:26.367 6.170 - 6.195: 99.6672% ( 1) 00:12:26.367 6.272 - 6.298: 99.6789% ( 2) 00:12:26.367 6.477 - 6.502: 99.6847% ( 1) 00:12:26.367 6.810 - 6.861: 99.6905% ( 1) 00:12:26.367 6.963 - 7.014: 99.6964% ( 1) 00:12:26.367 7.014 - 7.066: 99.7022% ( 1) 00:12:26.367 7.117 - 7.168: 99.7139% ( 2) 00:12:26.367 7.219 - 7.270: 99.7256% ( 2) 00:12:26.367 7.322 - 7.373: 99.7314% ( 1) 00:12:26.367 7.373 - 7.424: 99.7373% ( 1) 00:12:26.367 7.424 - 7.475: 99.7489% ( 2) 00:12:26.367 7.475 - 7.526: 99.7548% ( 1) 00:12:26.367 7.578 - 7.629: 99.7665% ( 2) 00:12:26.367 7.629 - 7.680: 99.7723% ( 1) 00:12:26.367 7.680 - 7.731: 99.7840% ( 2) 00:12:26.367 7.834 - 7.885: 99.7898% ( 1) 00:12:26.367 7.885 - 7.936: 99.8015% ( 2) 00:12:26.367 7.936 - 7.987: 99.8073% ( 1) 00:12:26.367 7.987 - 8.038: 99.8132% ( 1) 00:12:26.367 8.038 - 8.090: 99.8190% ( 1) 00:12:26.367 8.090 - 8.141: 99.8248% ( 1) 00:12:26.367 8.141 - 8.192: 99.8365% ( 2) 00:12:26.367 8.243 - 8.294: 99.8482% ( 2) 00:12:26.367 8.294 - 8.346: 99.8599% ( 2) 00:12:26.367 8.448 - 8.499: 99.8657% ( 1) 00:12:26.367 8.602 - 8.653: 99.8715% ( 1) 
00:12:26.367 8.858 - 8.909: 99.8774% ( 1) 00:12:26.367 9.216 - 9.267: 99.8832% ( 1) 00:12:26.367 [2024-04-24 21:27:49.137946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:26.367 9.779 - 9.830: 99.8949% ( 2) 00:12:26.367 16.179 - 16.282: 99.9007% ( 1) 00:12:26.367 3984.589 - 4010.803: 100.0000% ( 17) 00:12:26.367 00:12:26.367 Complete histogram 00:12:26.367 ================== 00:12:26.367 Range in us Cumulative Count 00:12:26.367 1.638 - 1.651: 0.0876% ( 15) 00:12:26.367 1.651 - 1.664: 1.1853% ( 188) 00:12:26.367 1.664 - 1.677: 1.8217% ( 109) 00:12:26.367 1.677 - 1.690: 1.9968% ( 30) 00:12:26.367 1.690 - 1.702: 17.0900% ( 2585) 00:12:26.367 1.702 - 1.715: 70.1699% ( 9091) 00:12:26.367 1.715 - 1.728: 83.3421% ( 2256) 00:12:26.367 1.728 - 1.741: 86.4600% ( 534) 00:12:26.367 1.741 - 1.754: 91.7674% ( 909) 00:12:26.367 1.754 - 1.766: 95.6852% ( 671) 00:12:26.367 1.766 - 1.779: 97.3200% ( 280) 00:12:26.367 1.779 - 1.792: 98.4411% ( 192) 00:12:26.367 1.792 - 1.805: 98.9257% ( 83) 00:12:26.367 1.805 - 1.818: 99.0541% ( 22) 00:12:26.367 1.818 - 1.830: 99.1475% ( 16) 00:12:26.367 1.830 - 1.843: 99.2585% ( 19) 00:12:26.367 1.843 - 1.856: 99.2994% ( 7) 00:12:26.367 1.856 - 1.869: 99.3052% ( 1) 00:12:26.367 1.869 - 1.882: 99.3402% ( 6) 00:12:26.367 1.894 - 1.907: 99.3461% ( 1) 00:12:26.367 1.933 - 1.946: 99.3519% ( 1) 00:12:26.367 2.048 - 2.061: 99.3577% ( 1) 00:12:26.367 2.112 - 2.125: 99.3694% ( 2) 00:12:26.367 2.138 - 2.150: 99.3811% ( 2) 00:12:26.367 2.176 - 2.189: 99.3869% ( 1) 00:12:26.367 4.557 - 4.582: 99.3928% ( 1) 00:12:26.367 5.350 - 5.376: 99.3986% ( 1) 00:12:26.367 5.427 - 5.453: 99.4044% ( 1) 00:12:26.367 5.478 - 5.504: 99.4103% ( 1) 00:12:26.367 5.581 - 5.606: 99.4161% ( 1) 00:12:26.367 5.683 - 5.709: 99.4220% ( 1) 00:12:26.367 5.709 - 5.734: 99.4278% ( 1) 00:12:26.367 5.760 - 5.786: 99.4336% ( 1) 00:12:26.367 5.888 - 5.914: 99.4395% ( 1) 00:12:26.367 6.042 - 6.067: 99.4453% ( 1) 00:12:26.367 6.144 - 6.170: 99.4512% ( 1) 00:12:26.367 6.272 - 6.298: 99.4570% ( 1) 00:12:26.367 6.400 - 6.426: 99.4687% ( 2) 00:12:26.367 6.451 - 6.477: 99.4745% ( 1) 00:12:26.367 6.502 - 6.528: 99.4804% ( 1) 00:12:26.367 6.528 - 6.554: 99.4862% ( 1) 00:12:26.367 6.605 - 6.656: 99.4920% ( 1) 00:12:26.367 6.810 - 6.861: 99.4979% ( 1) 00:12:26.367 7.066 - 7.117: 99.5095% ( 2) 00:12:26.367 7.117 - 7.168: 99.5154% ( 1) 00:12:26.367 7.526 - 7.578: 99.5212% ( 1) 00:12:26.367 7.731 - 7.782: 99.5271% ( 1) 00:12:26.367 8.448 - 8.499: 99.5329% ( 1) 00:12:26.367 8.499 - 8.550: 99.5387% ( 1) 00:12:26.367 8.602 - 8.653: 99.5446% ( 1) 00:12:26.367 10.138 - 10.189: 99.5504% ( 1) 00:12:26.367 11.981 - 12.032: 99.5563% ( 1) 00:12:26.367 12.032 - 12.083: 99.5621% ( 1) 00:12:26.367 12.083 - 12.134: 99.5679% ( 1) 00:12:26.367 3774.874 - 3801.088: 99.5738% ( 1) 00:12:26.367 3984.589 - 4010.803: 99.9942% ( 72) 00:12:26.367 4980.736 - 5006.950: 100.0000% ( 1) 00:12:26.367 00:12:26.367 21:27:49 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:26.367 21:27:49 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:26.367 21:27:49 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:26.367 21:27:49 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:26.367 21:27:49 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:26.625 [2024-04-24 
21:27:49.328727] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:12:26.625 [ 00:12:26.625 { 00:12:26.625 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:26.625 "subtype": "Discovery", 00:12:26.625 "listen_addresses": [], 00:12:26.625 "allow_any_host": true, 00:12:26.625 "hosts": [] 00:12:26.625 }, 00:12:26.625 { 00:12:26.625 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:26.625 "subtype": "NVMe", 00:12:26.625 "listen_addresses": [ 00:12:26.625 { 00:12:26.625 "transport": "VFIOUSER", 00:12:26.625 "trtype": "VFIOUSER", 00:12:26.625 "adrfam": "IPv4", 00:12:26.625 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:26.625 "trsvcid": "0" 00:12:26.625 } 00:12:26.625 ], 00:12:26.625 "allow_any_host": true, 00:12:26.625 "hosts": [], 00:12:26.625 "serial_number": "SPDK1", 00:12:26.625 "model_number": "SPDK bdev Controller", 00:12:26.625 "max_namespaces": 32, 00:12:26.625 "min_cntlid": 1, 00:12:26.625 "max_cntlid": 65519, 00:12:26.625 "namespaces": [ 00:12:26.625 { 00:12:26.625 "nsid": 1, 00:12:26.625 "bdev_name": "Malloc1", 00:12:26.625 "name": "Malloc1", 00:12:26.625 "nguid": "B66C3BB98ACE4059AC4C3B96C34F605E", 00:12:26.625 "uuid": "b66c3bb9-8ace-4059-ac4c-3b96c34f605e" 00:12:26.625 } 00:12:26.625 ] 00:12:26.625 }, 00:12:26.625 { 00:12:26.625 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:26.625 "subtype": "NVMe", 00:12:26.625 "listen_addresses": [ 00:12:26.625 { 00:12:26.625 "transport": "VFIOUSER", 00:12:26.625 "trtype": "VFIOUSER", 00:12:26.625 "adrfam": "IPv4", 00:12:26.625 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:26.625 "trsvcid": "0" 00:12:26.625 } 00:12:26.625 ], 00:12:26.625 "allow_any_host": true, 00:12:26.625 "hosts": [], 00:12:26.625 "serial_number": "SPDK2", 00:12:26.625 "model_number": "SPDK bdev Controller", 00:12:26.625 "max_namespaces": 32, 00:12:26.625 "min_cntlid": 1, 00:12:26.625 "max_cntlid": 65519, 00:12:26.625 "namespaces": [ 00:12:26.625 { 00:12:26.625 "nsid": 1, 00:12:26.625 "bdev_name": "Malloc2", 00:12:26.625 "name": "Malloc2", 00:12:26.625 "nguid": "CB3A964D56BF469BB2CF8CD5DF7AB53C", 00:12:26.625 "uuid": "cb3a964d-56bf-469b-b2cf-8cd5df7ab53c" 00:12:26.625 } 00:12:26.625 ] 00:12:26.625 } 00:12:26.625 ] 00:12:26.625 21:27:49 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:26.625 21:27:49 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2791368 00:12:26.625 21:27:49 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:26.625 21:27:49 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:26.625 21:27:49 -- common/autotest_common.sh@1251 -- # local i=0 00:12:26.625 21:27:49 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:26.625 21:27:49 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:26.625 21:27:49 -- common/autotest_common.sh@1262 -- # return 0 00:12:26.625 21:27:49 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:26.625 21:27:49 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:26.625 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.883 Malloc3 00:12:26.883 [2024-04-24 21:27:49.529851] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:26.883 21:27:49 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:26.883 [2024-04-24 21:27:49.710122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:26.883 21:27:49 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:26.883 Asynchronous Event Request test 00:12:26.883 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.883 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.883 Registering asynchronous event callbacks... 00:12:26.883 Starting namespace attribute notice tests for all controllers... 00:12:26.883 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:26.883 aer_cb - Changed Namespace 00:12:26.883 Cleaning up... 00:12:27.142 [ 00:12:27.142 { 00:12:27.142 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:27.142 "subtype": "Discovery", 00:12:27.142 "listen_addresses": [], 00:12:27.142 "allow_any_host": true, 00:12:27.142 "hosts": [] 00:12:27.142 }, 00:12:27.142 { 00:12:27.142 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:27.142 "subtype": "NVMe", 00:12:27.142 "listen_addresses": [ 00:12:27.142 { 00:12:27.142 "transport": "VFIOUSER", 00:12:27.142 "trtype": "VFIOUSER", 00:12:27.142 "adrfam": "IPv4", 00:12:27.142 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:27.142 "trsvcid": "0" 00:12:27.142 } 00:12:27.142 ], 00:12:27.142 "allow_any_host": true, 00:12:27.142 "hosts": [], 00:12:27.142 "serial_number": "SPDK1", 00:12:27.142 "model_number": "SPDK bdev Controller", 00:12:27.142 "max_namespaces": 32, 00:12:27.142 "min_cntlid": 1, 00:12:27.142 "max_cntlid": 65519, 00:12:27.142 "namespaces": [ 00:12:27.142 { 00:12:27.142 "nsid": 1, 00:12:27.142 "bdev_name": "Malloc1", 00:12:27.142 "name": "Malloc1", 00:12:27.142 "nguid": "B66C3BB98ACE4059AC4C3B96C34F605E", 00:12:27.142 "uuid": "b66c3bb9-8ace-4059-ac4c-3b96c34f605e" 00:12:27.142 }, 00:12:27.142 { 00:12:27.142 "nsid": 2, 00:12:27.142 "bdev_name": "Malloc3", 00:12:27.142 "name": "Malloc3", 00:12:27.142 "nguid": "AA57F9DCB4A345CE8305C523EA16C702", 00:12:27.142 "uuid": "aa57f9dc-b4a3-45ce-8305-c523ea16c702" 00:12:27.142 } 00:12:27.142 ] 00:12:27.142 }, 00:12:27.142 { 00:12:27.142 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:27.142 "subtype": "NVMe", 00:12:27.142 "listen_addresses": [ 00:12:27.142 { 00:12:27.142 "transport": "VFIOUSER", 00:12:27.142 "trtype": "VFIOUSER", 00:12:27.142 "adrfam": "IPv4", 00:12:27.142 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:27.142 "trsvcid": "0" 00:12:27.142 } 00:12:27.142 ], 00:12:27.142 "allow_any_host": true, 00:12:27.142 "hosts": [], 00:12:27.142 "serial_number": "SPDK2", 00:12:27.142 "model_number": "SPDK bdev Controller", 00:12:27.142 "max_namespaces": 32, 00:12:27.142 "min_cntlid": 1, 
00:12:27.142 "max_cntlid": 65519, 00:12:27.142 "namespaces": [ 00:12:27.142 { 00:12:27.142 "nsid": 1, 00:12:27.142 "bdev_name": "Malloc2", 00:12:27.142 "name": "Malloc2", 00:12:27.142 "nguid": "CB3A964D56BF469BB2CF8CD5DF7AB53C", 00:12:27.142 "uuid": "cb3a964d-56bf-469b-b2cf-8cd5df7ab53c" 00:12:27.142 } 00:12:27.142 ] 00:12:27.142 } 00:12:27.142 ] 00:12:27.142 21:27:49 -- target/nvmf_vfio_user.sh@44 -- # wait 2791368 00:12:27.142 21:27:49 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:27.142 21:27:49 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:27.142 21:27:49 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:27.142 21:27:49 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:27.142 [2024-04-24 21:27:49.937020] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:12:27.142 [2024-04-24 21:27:49.937073] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2791392 ] 00:12:27.142 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.142 [2024-04-24 21:27:49.969672] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:27.143 [2024-04-24 21:27:49.972321] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:27.143 [2024-04-24 21:27:49.972343] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f30cf9eb000 00:12:27.143 [2024-04-24 21:27:49.973322] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.143 [2024-04-24 21:27:49.974329] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.143 [2024-04-24 21:27:49.975336] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.143 [2024-04-24 21:27:49.976337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:27.143 [2024-04-24 21:27:49.977349] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:27.143 [2024-04-24 21:27:49.978351] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.143 [2024-04-24 21:27:49.979357] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:27.143 [2024-04-24 21:27:49.980365] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:27.143 [2024-04-24 21:27:49.981377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:27.143 [2024-04-24 21:27:49.981392] vfio_user_pci.c: 
233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f30cf9e0000 00:12:27.143 [2024-04-24 21:27:49.982286] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:27.143 [2024-04-24 21:27:49.990505] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:27.143 [2024-04-24 21:27:49.990525] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:27.143 [2024-04-24 21:27:49.995618] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:27.143 [2024-04-24 21:27:49.995655] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:27.143 [2024-04-24 21:27:49.995722] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:27.143 [2024-04-24 21:27:49.995741] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:27.143 [2024-04-24 21:27:49.995748] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:27.143 [2024-04-24 21:27:49.996618] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:27.143 [2024-04-24 21:27:49.996629] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:27.143 [2024-04-24 21:27:49.996643] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:27.143 [2024-04-24 21:27:49.997623] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:27.143 [2024-04-24 21:27:49.997633] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:27.143 [2024-04-24 21:27:49.997642] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:27.143 [2024-04-24 21:27:49.998632] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:27.143 [2024-04-24 21:27:49.998642] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:27.143 [2024-04-24 21:27:49.999636] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:27.143 [2024-04-24 21:27:49.999646] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:27.143 [2024-04-24 21:27:49.999653] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:27.143 [2024-04-24 21:27:49.999661] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:27.143 [2024-04-24 21:27:49.999768] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:27.143 [2024-04-24 21:27:49.999775] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:27.143 [2024-04-24 21:27:49.999781] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:27.143 [2024-04-24 21:27:50.000643] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:27.143 [2024-04-24 21:27:50.001649] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:27.143 [2024-04-24 21:27:50.002653] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:27.143 [2024-04-24 21:27:50.003655] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:27.143 [2024-04-24 21:27:50.003696] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:27.143 [2024-04-24 21:27:50.004665] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:27.143 [2024-04-24 21:27:50.004676] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:27.143 [2024-04-24 21:27:50.004682] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:27.143 [2024-04-24 21:27:50.004701] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:27.143 [2024-04-24 21:27:50.004715] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:27.143 [2024-04-24 21:27:50.004731] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:27.143 [2024-04-24 21:27:50.004738] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:27.143 [2024-04-24 21:27:50.004753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:27.143 [2024-04-24 21:27:50.013587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:27.143 [2024-04-24 21:27:50.013612] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:27.143 [2024-04-24 21:27:50.013620] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:27.143 [2024-04-24 21:27:50.013626] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:27.143 [2024-04-24 21:27:50.013634] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:27.143 [2024-04-24 21:27:50.013641] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:27.143 [2024-04-24 21:27:50.013648] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:27.143 [2024-04-24 21:27:50.013655] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:27.143 [2024-04-24 21:27:50.013666] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:27.143 [2024-04-24 21:27:50.013679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:27.143 [2024-04-24 21:27:50.020460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:27.144 [2024-04-24 21:27:50.020479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.144 [2024-04-24 21:27:50.020489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.144 [2024-04-24 21:27:50.020498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.144 [2024-04-24 21:27:50.020508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:27.144 [2024-04-24 21:27:50.020515] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:27.144 [2024-04-24 21:27:50.020526] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:27.144 [2024-04-24 21:27:50.020536] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:27.144 [2024-04-24 21:27:50.028462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:27.144 [2024-04-24 21:27:50.028473] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:27.144 [2024-04-24 21:27:50.028480] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:27.144 [2024-04-24 21:27:50.028491] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:27.144 [2024-04-24 21:27:50.028500] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:27.144 [2024-04-24 21:27:50.028510] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:27.402 [2024-04-24 21:27:50.036459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:27.402 [2024-04-24 21:27:50.036507] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:27.402 [2024-04-24 21:27:50.036517] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:27.402 [2024-04-24 21:27:50.036527] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:27.402 [2024-04-24 21:27:50.036534] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:27.402 [2024-04-24 21:27:50.036542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:27.402 [2024-04-24 21:27:50.044462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:27.402 [2024-04-24 21:27:50.044480] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:27.402 [2024-04-24 21:27:50.044492] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:27.402 [2024-04-24 21:27:50.044502] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:27.402 [2024-04-24 21:27:50.044511] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:27.402 [2024-04-24 21:27:50.044517] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:27.402 [2024-04-24 21:27:50.044525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:27.402 [2024-04-24 21:27:50.052457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:27.402 [2024-04-24 21:27:50.052475] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:27.402 [2024-04-24 21:27:50.052485] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:27.402 [2024-04-24 21:27:50.052494] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:27.402 [2024-04-24 21:27:50.052500] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:27.402 [2024-04-24 21:27:50.052507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:27.402 [2024-04-24 21:27:50.060458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:27.402 [2024-04-24 21:27:50.060470] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:27.402 [2024-04-24 21:27:50.060479] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:27.402 [2024-04-24 21:27:50.060489] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:27.402 [2024-04-24 21:27:50.060496] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:27.402 [2024-04-24 21:27:50.060502] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:27.402 [2024-04-24 21:27:50.060511] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:27.402 [2024-04-24 21:27:50.060517] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:27.402 [2024-04-24 21:27:50.060524] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:27.402 [2024-04-24 21:27:50.060544] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:27.402 [2024-04-24 21:27:50.068461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:27.402 [2024-04-24 21:27:50.068477] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:27.402 [2024-04-24 21:27:50.076459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:27.402 [2024-04-24 21:27:50.076474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:27.402 [2024-04-24 21:27:50.084472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:27.402 [2024-04-24 21:27:50.084510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:27.402 [2024-04-24 21:27:50.092470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:27.402 [2024-04-24 21:27:50.092502] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:27.402 [2024-04-24 21:27:50.092514] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:27.402 [2024-04-24 21:27:50.092522] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:27.402 [2024-04-24 21:27:50.092529] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:27.402 [2024-04-24 21:27:50.092540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:27.402 
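The debug records above trace the standard NVMe controller bring-up that the identify example performs over vfio-user: read VS and CAP, clear CC.EN and wait for CSTS.RDY = 0, program the admin queue registers (ASQ at offset 0x28, ACQ at 0x30, AQA at 0x24), write CC.EN = 1 (offset 0x14) and poll CSTS (offset 0x1c) until RDY = 1, then walk the admin command sequence (IDENTIFY controller, configure AER, keep-alive timeout, number of queues, IDENTIFY active namespaces) before the GET LOG PAGE reads that continue below. A minimal sketch for reproducing this trace by hand, assuming an SPDK build tree at this job's workspace path and a target already listening on the vfio-user socket (both are assumptions; adjust for your environment, and note -g is carried over verbatim from the run above):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: this run's checkout
  # -r selects the transport ID; the -L flags enable the per-component
  # debug logs (nvme, nvme_vfio, vfio_pci) that produce the records above.
  "$SPDK_DIR"/build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -g -L nvme -L nvme_vfio -L vfio_pci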
[2024-04-24 21:27:50.092551] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:27.402 [2024-04-24 21:27:50.092559] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:27.402 [2024-04-24 21:27:50.092570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:27.402 [2024-04-24 21:27:50.092582] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:27.402 [2024-04-24 21:27:50.092590] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:27.402 [2024-04-24 21:27:50.092600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:27.402 [2024-04-24 21:27:50.092611] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:27.402 [2024-04-24 21:27:50.092619] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:27.403 [2024-04-24 21:27:50.092631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:27.403 [2024-04-24 21:27:50.100472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:27.403 [2024-04-24 21:27:50.100507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:27.403 [2024-04-24 21:27:50.100525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:27.403 [2024-04-24 21:27:50.100542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:27.403 ===================================================== 00:12:27.403 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:27.403 ===================================================== 00:12:27.403 Controller Capabilities/Features 00:12:27.403 ================================ 00:12:27.403 Vendor ID: 4e58 00:12:27.403 Subsystem Vendor ID: 4e58 00:12:27.403 Serial Number: SPDK2 00:12:27.403 Model Number: SPDK bdev Controller 00:12:27.403 Firmware Version: 24.05 00:12:27.403 Recommended Arb Burst: 6 00:12:27.403 IEEE OUI Identifier: 8d 6b 50 00:12:27.403 Multi-path I/O 00:12:27.403 May have multiple subsystem ports: Yes 00:12:27.403 May have multiple controllers: Yes 00:12:27.403 Associated with SR-IOV VF: No 00:12:27.403 Max Data Transfer Size: 131072 00:12:27.403 Max Number of Namespaces: 32 00:12:27.403 Max Number of I/O Queues: 127 00:12:27.403 NVMe Specification Version (VS): 1.3 00:12:27.403 NVMe Specification Version (Identify): 1.3 00:12:27.403 Maximum Queue Entries: 256 00:12:27.403 Contiguous Queues Required: Yes 00:12:27.403 Arbitration Mechanisms Supported 00:12:27.403 Weighted Round Robin: Not Supported 00:12:27.403 Vendor Specific: Not Supported 00:12:27.403 Reset Timeout: 15000 ms 00:12:27.403 Doorbell Stride: 4 bytes 00:12:27.403 NVM Subsystem Reset: Not Supported 00:12:27.403 Command Sets Supported 00:12:27.403 NVM Command Set: Supported 00:12:27.403 Boot Partition: Not Supported 00:12:27.403 
Memory Page Size Minimum: 4096 bytes 00:12:27.403 Memory Page Size Maximum: 4096 bytes 00:12:27.403 Persistent Memory Region: Not Supported 00:12:27.403 Optional Asynchronous Events Supported 00:12:27.403 Namespace Attribute Notices: Supported 00:12:27.403 Firmware Activation Notices: Not Supported 00:12:27.403 ANA Change Notices: Not Supported 00:12:27.403 PLE Aggregate Log Change Notices: Not Supported 00:12:27.403 LBA Status Info Alert Notices: Not Supported 00:12:27.403 EGE Aggregate Log Change Notices: Not Supported 00:12:27.403 Normal NVM Subsystem Shutdown event: Not Supported 00:12:27.403 Zone Descriptor Change Notices: Not Supported 00:12:27.403 Discovery Log Change Notices: Not Supported 00:12:27.403 Controller Attributes 00:12:27.403 128-bit Host Identifier: Supported 00:12:27.403 Non-Operational Permissive Mode: Not Supported 00:12:27.403 NVM Sets: Not Supported 00:12:27.403 Read Recovery Levels: Not Supported 00:12:27.403 Endurance Groups: Not Supported 00:12:27.403 Predictable Latency Mode: Not Supported 00:12:27.403 Traffic Based Keep ALive: Not Supported 00:12:27.403 Namespace Granularity: Not Supported 00:12:27.403 SQ Associations: Not Supported 00:12:27.403 UUID List: Not Supported 00:12:27.403 Multi-Domain Subsystem: Not Supported 00:12:27.403 Fixed Capacity Management: Not Supported 00:12:27.403 Variable Capacity Management: Not Supported 00:12:27.403 Delete Endurance Group: Not Supported 00:12:27.403 Delete NVM Set: Not Supported 00:12:27.403 Extended LBA Formats Supported: Not Supported 00:12:27.403 Flexible Data Placement Supported: Not Supported 00:12:27.403 00:12:27.403 Controller Memory Buffer Support 00:12:27.403 ================================ 00:12:27.403 Supported: No 00:12:27.403 00:12:27.403 Persistent Memory Region Support 00:12:27.403 ================================ 00:12:27.403 Supported: No 00:12:27.403 00:12:27.403 Admin Command Set Attributes 00:12:27.403 ============================ 00:12:27.403 Security Send/Receive: Not Supported 00:12:27.403 Format NVM: Not Supported 00:12:27.403 Firmware Activate/Download: Not Supported 00:12:27.403 Namespace Management: Not Supported 00:12:27.403 Device Self-Test: Not Supported 00:12:27.403 Directives: Not Supported 00:12:27.403 NVMe-MI: Not Supported 00:12:27.403 Virtualization Management: Not Supported 00:12:27.403 Doorbell Buffer Config: Not Supported 00:12:27.403 Get LBA Status Capability: Not Supported 00:12:27.403 Command & Feature Lockdown Capability: Not Supported 00:12:27.403 Abort Command Limit: 4 00:12:27.403 Async Event Request Limit: 4 00:12:27.403 Number of Firmware Slots: N/A 00:12:27.403 Firmware Slot 1 Read-Only: N/A 00:12:27.403 Firmware Activation Without Reset: N/A 00:12:27.403 Multiple Update Detection Support: N/A 00:12:27.403 Firmware Update Granularity: No Information Provided 00:12:27.403 Per-Namespace SMART Log: No 00:12:27.403 Asymmetric Namespace Access Log Page: Not Supported 00:12:27.403 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:27.403 Command Effects Log Page: Supported 00:12:27.403 Get Log Page Extended Data: Supported 00:12:27.403 Telemetry Log Pages: Not Supported 00:12:27.403 Persistent Event Log Pages: Not Supported 00:12:27.403 Supported Log Pages Log Page: May Support 00:12:27.403 Commands Supported & Effects Log Page: Not Supported 00:12:27.403 Feature Identifiers & Effects Log Page:May Support 00:12:27.403 NVMe-MI Commands & Effects Log Page: May Support 00:12:27.403 Data Area 4 for Telemetry Log: Not Supported 00:12:27.403 Error Log Page Entries Supported: 128 
00:12:27.403 Keep Alive: Supported 00:12:27.403 Keep Alive Granularity: 10000 ms 00:12:27.403 00:12:27.403 NVM Command Set Attributes 00:12:27.403 ========================== 00:12:27.403 Submission Queue Entry Size 00:12:27.403 Max: 64 00:12:27.403 Min: 64 00:12:27.403 Completion Queue Entry Size 00:12:27.403 Max: 16 00:12:27.403 Min: 16 00:12:27.403 Number of Namespaces: 32 00:12:27.403 Compare Command: Supported 00:12:27.403 Write Uncorrectable Command: Not Supported 00:12:27.403 Dataset Management Command: Supported 00:12:27.403 Write Zeroes Command: Supported 00:12:27.403 Set Features Save Field: Not Supported 00:12:27.403 Reservations: Not Supported 00:12:27.403 Timestamp: Not Supported 00:12:27.403 Copy: Supported 00:12:27.403 Volatile Write Cache: Present 00:12:27.403 Atomic Write Unit (Normal): 1 00:12:27.403 Atomic Write Unit (PFail): 1 00:12:27.403 Atomic Compare & Write Unit: 1 00:12:27.403 Fused Compare & Write: Supported 00:12:27.403 Scatter-Gather List 00:12:27.403 SGL Command Set: Supported (Dword aligned) 00:12:27.403 SGL Keyed: Not Supported 00:12:27.403 SGL Bit Bucket Descriptor: Not Supported 00:12:27.403 SGL Metadata Pointer: Not Supported 00:12:27.403 Oversized SGL: Not Supported 00:12:27.403 SGL Metadata Address: Not Supported 00:12:27.403 SGL Offset: Not Supported 00:12:27.403 Transport SGL Data Block: Not Supported 00:12:27.403 Replay Protected Memory Block: Not Supported 00:12:27.403 00:12:27.403 Firmware Slot Information 00:12:27.403 ========================= 00:12:27.403 Active slot: 1 00:12:27.403 Slot 1 Firmware Revision: 24.05 00:12:27.403 00:12:27.403 00:12:27.403 Commands Supported and Effects 00:12:27.403 ============================== 00:12:27.403 Admin Commands 00:12:27.403 -------------- 00:12:27.403 Get Log Page (02h): Supported 00:12:27.403 Identify (06h): Supported 00:12:27.403 Abort (08h): Supported 00:12:27.403 Set Features (09h): Supported 00:12:27.403 Get Features (0Ah): Supported 00:12:27.403 Asynchronous Event Request (0Ch): Supported 00:12:27.403 Keep Alive (18h): Supported 00:12:27.403 I/O Commands 00:12:27.403 ------------ 00:12:27.403 Flush (00h): Supported LBA-Change 00:12:27.403 Write (01h): Supported LBA-Change 00:12:27.403 Read (02h): Supported 00:12:27.403 Compare (05h): Supported 00:12:27.403 Write Zeroes (08h): Supported LBA-Change 00:12:27.403 Dataset Management (09h): Supported LBA-Change 00:12:27.403 Copy (19h): Supported LBA-Change 00:12:27.403 Unknown (79h): Supported LBA-Change 00:12:27.403 Unknown (7Ah): Supported 00:12:27.403 00:12:27.403 Error Log 00:12:27.403 ========= 00:12:27.403 00:12:27.403 Arbitration 00:12:27.403 =========== 00:12:27.403 Arbitration Burst: 1 00:12:27.403 00:12:27.403 Power Management 00:12:27.403 ================ 00:12:27.403 Number of Power States: 1 00:12:27.403 Current Power State: Power State #0 00:12:27.403 Power State #0: 00:12:27.403 Max Power: 0.00 W 00:12:27.403 Non-Operational State: Operational 00:12:27.403 Entry Latency: Not Reported 00:12:27.403 Exit Latency: Not Reported 00:12:27.403 Relative Read Throughput: 0 00:12:27.403 Relative Read Latency: 0 00:12:27.403 Relative Write Throughput: 0 00:12:27.403 Relative Write Latency: 0 00:12:27.403 Idle Power: Not Reported 00:12:27.403 Active Power: Not Reported 00:12:27.403 Non-Operational Permissive Mode: Not Supported 00:12:27.403 00:12:27.403 Health Information 00:12:27.403 ================== 00:12:27.403 Critical Warnings: 00:12:27.403 Available Spare Space: OK 00:12:27.403 Temperature: OK 00:12:27.403 Device Reliability: OK 00:12:27.403 
Read Only: No 00:12:27.403 Volatile Memory Backup: OK 00:12:27.403 [2024-04-24 21:27:50.100685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:27.403 [2024-04-24 21:27:50.108472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:27.403 [2024-04-24 21:27:50.108530] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:27.403 [2024-04-24 21:27:50.108548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.403 [2024-04-24 21:27:50.108561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.403 [2024-04-24 21:27:50.108573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.403 [2024-04-24 21:27:50.108585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:27.403 [2024-04-24 21:27:50.108675] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:27.403 [2024-04-24 21:27:50.108694] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:27.403 [2024-04-24 21:27:50.109677] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:27.403 [2024-04-24 21:27:50.109753] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:27.403 [2024-04-24 21:27:50.109767] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:27.403 [2024-04-24 21:27:50.110681] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:27.403 [2024-04-24 21:27:50.110706] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:27.403 [2024-04-24 21:27:50.110857] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:27.403 [2024-04-24 21:27:50.113467] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:27.403 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:27.403 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:27.403 Available Spare: 0% 00:12:27.403 Available Spare Threshold: 0% 00:12:27.403 Life Percentage Used: 0% 00:12:27.403 Data Units Read: 0 00:12:27.403 Data Units Written: 0 00:12:27.403 Host Read Commands: 0 00:12:27.403 Host Write Commands: 0 00:12:27.403 Controller Busy Time: 0 minutes 00:12:27.403 Power Cycles: 0 00:12:27.403 Power On Hours: 0 hours 00:12:27.403 Unsafe Shutdowns: 0 00:12:27.403 Unrecoverable Media Errors: 0 00:12:27.403 Lifetime Error Log Entries: 0 00:12:27.403 Warning Temperature Time: 0 minutes 00:12:27.403 Critical Temperature Time: 0 minutes 00:12:27.403 00:12:27.403 Number of Queues 00:12:27.403 ================ 00:12:27.403 Number of I/O Submission Queues: 127
00:12:27.403 Number of I/O Completion Queues: 127 00:12:27.403 00:12:27.403 Active Namespaces 00:12:27.403 ================= 00:12:27.403 Namespace ID:1 00:12:27.403 Error Recovery Timeout: Unlimited 00:12:27.403 Command Set Identifier: NVM (00h) 00:12:27.403 Deallocate: Supported 00:12:27.403 Deallocated/Unwritten Error: Not Supported 00:12:27.403 Deallocated Read Value: Unknown 00:12:27.403 Deallocate in Write Zeroes: Not Supported 00:12:27.403 Deallocated Guard Field: 0xFFFF 00:12:27.403 Flush: Supported 00:12:27.403 Reservation: Supported 00:12:27.403 Namespace Sharing Capabilities: Multiple Controllers 00:12:27.403 Size (in LBAs): 131072 (0GiB) 00:12:27.403 Capacity (in LBAs): 131072 (0GiB) 00:12:27.403 Utilization (in LBAs): 131072 (0GiB) 00:12:27.403 NGUID: CB3A964D56BF469BB2CF8CD5DF7AB53C 00:12:27.403 UUID: cb3a964d-56bf-469b-b2cf-8cd5df7ab53c 00:12:27.403 Thin Provisioning: Not Supported 00:12:27.403 Per-NS Atomic Units: Yes 00:12:27.403 Atomic Boundary Size (Normal): 0 00:12:27.403 Atomic Boundary Size (PFail): 0 00:12:27.403 Atomic Boundary Offset: 0 00:12:27.403 Maximum Single Source Range Length: 65535 00:12:27.403 Maximum Copy Length: 65535 00:12:27.403 Maximum Source Range Count: 1 00:12:27.403 NGUID/EUI64 Never Reused: No 00:12:27.403 Namespace Write Protected: No 00:12:27.403 Number of LBA Formats: 1 00:12:27.403 Current LBA Format: LBA Format #00 00:12:27.403 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:27.403 00:12:27.403 21:27:50 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:27.403 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.661 [2024-04-24 21:27:50.329647] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:32.950 [2024-04-24 21:27:55.437696] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:32.950 Initializing NVMe Controllers 00:12:32.950 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:32.950 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:32.950 Initialization complete. Launching workers. 
00:12:32.950 ======================================================== 00:12:32.950 Latency(us) 00:12:32.950 Device Information : IOPS MiB/s Average min max 00:12:32.950 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39953.80 156.07 3203.52 926.69 6720.81 00:12:32.950 ======================================================== 00:12:32.950 Total : 39953.80 156.07 3203.52 926.69 6720.81 00:12:32.950 00:12:32.950 21:27:55 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:32.950 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.950 [2024-04-24 21:27:55.648306] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:38.266 [2024-04-24 21:28:00.669022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:38.266 Initializing NVMe Controllers 00:12:38.266 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:38.266 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:38.266 Initialization complete. Launching workers. 00:12:38.266 ======================================================== 00:12:38.266 Latency(us) 00:12:38.266 Device Information : IOPS MiB/s Average min max 00:12:38.266 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39887.18 155.81 3209.15 916.52 9058.42 00:12:38.266 ======================================================== 00:12:38.266 Total : 39887.18 155.81 3209.15 916.52 9058.42 00:12:38.266 00:12:38.266 21:28:00 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:38.266 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.266 [2024-04-24 21:28:00.879074] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:43.545 [2024-04-24 21:28:06.013565] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:43.545 Initializing NVMe Controllers 00:12:43.545 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:43.545 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:43.545 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:43.545 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:43.545 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:43.545 Initialization complete. Launching workers. 
00:12:43.545 Starting thread on core 2 00:12:43.545 Starting thread on core 3 00:12:43.545 Starting thread on core 1 00:12:43.545 21:28:06 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:43.545 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.545 [2024-04-24 21:28:06.300345] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:46.840 [2024-04-24 21:28:09.358030] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:46.840 Initializing NVMe Controllers 00:12:46.840 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:46.840 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:46.840 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:46.840 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:46.840 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:46.840 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:46.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:46.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:46.840 Initialization complete. Launching workers. 00:12:46.840 Starting thread on core 1 with urgent priority queue 00:12:46.840 Starting thread on core 2 with urgent priority queue 00:12:46.840 Starting thread on core 3 with urgent priority queue 00:12:46.840 Starting thread on core 0 with urgent priority queue 00:12:46.840 SPDK bdev Controller (SPDK2 ) core 0: 8277.33 IO/s 12.08 secs/100000 ios 00:12:46.840 SPDK bdev Controller (SPDK2 ) core 1: 9365.67 IO/s 10.68 secs/100000 ios 00:12:46.840 SPDK bdev Controller (SPDK2 ) core 2: 9778.33 IO/s 10.23 secs/100000 ios 00:12:46.840 SPDK bdev Controller (SPDK2 ) core 3: 10124.67 IO/s 9.88 secs/100000 ios 00:12:46.840 ======================================================== 00:12:46.840 00:12:46.840 21:28:09 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:46.840 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.840 [2024-04-24 21:28:09.649939] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:46.840 [2024-04-24 21:28:09.660010] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:46.840 Initializing NVMe Controllers 00:12:46.840 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:46.840 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:46.840 Namespace ID: 1 size: 0GB 00:12:46.840 Initialization complete. 00:12:46.840 INFO: using host memory buffer for IO 00:12:46.840 Hello world! 
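The two spdk_nvme_perf runs above drive namespace 1 of cnode2 at queue depth 128 with 4 KiB I/O, first reads and then writes, and both land near 40k IOPS (156 MiB/s) at about 3.2 ms average latency; that is self-consistent by Little's law, since 128 outstanding I/Os at roughly 40,000 IOPS gives 128 / 40000 = 3.2 ms per I/O. The reconnect, arbitration, and hello_world examples that follow reuse the identical transport ID string. A sketch of the two perf invocations with the knobs spelled out (workspace path as in this run; -s 256 and -g are carried over verbatim from the trace above):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: this run's checkout
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # -q queue depth, -o I/O size in bytes, -w workload type,
  # -t run time in seconds, -c core mask (0x2 = lcore 1 only).
  "$SPDK_DIR"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  "$SPDK_DIR"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2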
00:12:46.840 21:28:09 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:47.100 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.100 [2024-04-24 21:28:09.931165] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:48.480 Initializing NVMe Controllers 00:12:48.480 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:48.480 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:48.480 Initialization complete. Launching workers. 00:12:48.480 submit (in ns) avg, min, max = 7122.8, 3022.4, 3999517.6 00:12:48.480 complete (in ns) avg, min, max = 18612.4, 1662.4, 6990532.0 00:12:48.480 00:12:48.480 Submit histogram 00:12:48.480 ================ 00:12:48.480 Range in us Cumulative Count 00:12:48.480 3.021 - 3.034: 0.0058% ( 1) 00:12:48.480 3.034 - 3.046: 0.0115% ( 1) 00:12:48.480 3.046 - 3.059: 0.0980% ( 15) 00:12:48.480 3.059 - 3.072: 0.6055% ( 88) 00:12:48.480 3.072 - 3.085: 1.9493% ( 233) 00:12:48.480 3.085 - 3.098: 4.2157% ( 393) 00:12:48.480 3.098 - 3.110: 7.1972% ( 517) 00:12:48.480 3.110 - 3.123: 10.9862% ( 657) 00:12:48.480 3.123 - 3.136: 16.2630% ( 915) 00:12:48.480 3.136 - 3.149: 21.6782% ( 939) 00:12:48.480 3.149 - 3.162: 27.8143% ( 1064) 00:12:48.480 3.162 - 3.174: 33.9216% ( 1059) 00:12:48.480 3.174 - 3.187: 40.9054% ( 1211) 00:12:48.480 3.187 - 3.200: 47.9354% ( 1219) 00:12:48.480 3.200 - 3.213: 52.6701% ( 821) 00:12:48.480 3.213 - 3.226: 55.7266% ( 530) 00:12:48.480 3.226 - 3.238: 58.2584% ( 439) 00:12:48.480 3.238 - 3.251: 61.2399% ( 517) 00:12:48.480 3.251 - 3.264: 63.7889% ( 442) 00:12:48.480 3.264 - 3.277: 65.9746% ( 379) 00:12:48.480 3.277 - 3.302: 70.6113% ( 804) 00:12:48.480 3.302 - 3.328: 79.0254% ( 1459) 00:12:48.480 3.328 - 3.354: 84.0600% ( 873) 00:12:48.480 3.354 - 3.379: 86.1822% ( 368) 00:12:48.480 3.379 - 3.405: 87.2780% ( 190) 00:12:48.480 3.405 - 3.430: 88.2699% ( 172) 00:12:48.480 3.430 - 3.456: 89.8789% ( 279) 00:12:48.480 3.456 - 3.482: 91.5802% ( 295) 00:12:48.480 3.482 - 3.507: 93.3449% ( 306) 00:12:48.480 3.507 - 3.533: 94.6021% ( 218) 00:12:48.480 3.533 - 3.558: 95.6863% ( 188) 00:12:48.480 3.558 - 3.584: 97.0588% ( 238) 00:12:48.480 3.584 - 3.610: 98.0161% ( 166) 00:12:48.480 3.610 - 3.635: 98.5986% ( 101) 00:12:48.480 3.635 - 3.661: 98.8985% ( 52) 00:12:48.480 3.661 - 3.686: 99.1580% ( 45) 00:12:48.480 3.686 - 3.712: 99.3253% ( 29) 00:12:48.480 3.712 - 3.738: 99.3772% ( 9) 00:12:48.480 3.738 - 3.763: 99.3887% ( 2) 00:12:48.480 3.763 - 3.789: 99.4118% ( 4) 00:12:48.480 3.789 - 3.814: 99.4464% ( 6) 00:12:48.480 3.814 - 3.840: 99.4579% ( 2) 00:12:48.480 3.840 - 3.866: 99.4694% ( 2) 00:12:48.480 3.866 - 3.891: 99.4752% ( 1) 00:12:48.480 3.891 - 3.917: 99.4810% ( 1) 00:12:48.480 3.917 - 3.942: 99.4983% ( 3) 00:12:48.480 3.942 - 3.968: 99.5040% ( 1) 00:12:48.480 3.968 - 3.994: 99.5098% ( 1) 00:12:48.480 3.994 - 4.019: 99.5156% ( 1) 00:12:48.480 4.096 - 4.122: 99.5213% ( 1) 00:12:48.480 4.173 - 4.198: 99.5271% ( 1) 00:12:48.480 4.250 - 4.275: 99.5329% ( 1) 00:12:48.480 4.301 - 4.326: 99.5444% ( 2) 00:12:48.480 4.429 - 4.454: 99.5502% ( 1) 00:12:48.480 4.506 - 4.531: 99.5559% ( 1) 00:12:48.480 4.582 - 4.608: 99.5617% ( 1) 00:12:48.480 4.685 - 4.710: 99.5675% ( 1) 00:12:48.480 4.710 - 4.736: 99.5790% ( 2) 00:12:48.480 4.992 - 5.018: 99.5848% ( 1) 00:12:48.480 5.504 - 5.530: 99.5905% ( 1) 00:12:48.480 
5.811 - 5.837: 99.5963% ( 1) 00:12:48.480 6.144 - 6.170: 99.6021% ( 1) 00:12:48.480 6.195 - 6.221: 99.6078% ( 1) 00:12:48.480 6.323 - 6.349: 99.6136% ( 1) 00:12:48.480 6.374 - 6.400: 99.6194% ( 1) 00:12:48.480 6.477 - 6.502: 99.6251% ( 1) 00:12:48.480 6.528 - 6.554: 99.6309% ( 1) 00:12:48.480 6.554 - 6.605: 99.6424% ( 2) 00:12:48.480 6.656 - 6.707: 99.6482% ( 1) 00:12:48.480 6.758 - 6.810: 99.6540% ( 1) 00:12:48.480 6.810 - 6.861: 99.6770% ( 4) 00:12:48.480 6.861 - 6.912: 99.7001% ( 4) 00:12:48.480 6.912 - 6.963: 99.7116% ( 2) 00:12:48.480 7.014 - 7.066: 99.7405% ( 5) 00:12:48.480 7.168 - 7.219: 99.7520% ( 2) 00:12:48.480 7.322 - 7.373: 99.7578% ( 1) 00:12:48.480 7.424 - 7.475: 99.7751% ( 3) 00:12:48.480 7.526 - 7.578: 99.7924% ( 3) 00:12:48.480 7.578 - 7.629: 99.8039% ( 2) 00:12:48.480 7.629 - 7.680: 99.8212% ( 3) 00:12:48.480 7.834 - 7.885: 99.8270% ( 1) 00:12:48.480 7.987 - 8.038: 99.8328% ( 1) 00:12:48.480 8.090 - 8.141: 99.8385% ( 1) 00:12:48.480 8.192 - 8.243: 99.8443% ( 1) 00:12:48.480 8.653 - 8.704: 99.8501% ( 1) 00:12:48.480 9.011 - 9.062: 99.8558% ( 1) 00:12:48.480 9.216 - 9.267: 99.8616% ( 1) 00:12:48.480 9.318 - 9.370: 99.8674% ( 1) 00:12:48.480 9.574 - 9.626: 99.8731% ( 1) 00:12:48.480 11.059 - 11.110: 99.8789% ( 1) 00:12:48.480 13.210 - 13.312: 99.8847% ( 1) 00:12:48.480 13.517 - 13.619: 99.8904% ( 1) 00:12:48.480 15.667 - 15.770: 99.8962% ( 1) 00:12:48.480 16.179 - 16.282: 99.9020% ( 1) 00:12:48.480 3001.549 - 3014.656: 99.9077% ( 1) 00:12:48.480 3984.589 - 4010.803: 100.0000% ( 16) 00:12:48.480 00:12:48.480 Complete histogram 00:12:48.480 ================== 00:12:48.480 Range in us Cumulative Count 00:12:48.480 1.651 - 1.664: 0.0058% ( 1) 00:12:48.480 1.664 - 1.677: 0.1557% ( 26) 00:12:48.480 1.677 - 1.690: 0.2076% ( 9) 00:12:48.480 1.690 - 1.702: 0.5017% ( 51) 00:12:48.480 1.702 - 1.715: 19.9942% ( 3380) 00:12:48.480 1.715 - 1.728: 69.3483% ( 8558) 00:12:48.480 1.728 - 1.741: 83.6621% ( 2482) 00:12:48.480 1.741 - 1.754: 89.1407% ( 950) 00:12:48.480 1.754 - 1.766: 92.1972% ( 530) 00:12:48.480 1.766 - 1.779: 94.3137% ( 367) 00:12:48.480 1.779 - 1.792: 95.8420% ( 265) 00:12:48.480 1.792 - 1.805: 97.0531% ( 210) 00:12:48.480 1.805 - 1.818: 97.6528% ( 104) 00:12:48.480 1.818 - 1.830: 97.8950% ( 42) 00:12:48.480 1.830 - 1.843: 98.1373% ( 42) 00:12:48.480 1.843 - 1.856: 98.2065% ( 12) 00:12:48.480 1.856 - 1.869: 98.3218% ( 20) 00:12:48.480 1.869 - 1.882: 98.6044% ( 49) 00:12:48.480 1.882 - 1.894: 98.9273% ( 56) 00:12:48.480 1.894 - 1.907: 99.0427% ( 20) 00:12:48.480 1.907 - 1.920: 99.1292% ( 15) 00:12:48.480 1.920 - 1.933: 99.1926% ( 11) 00:12:48.480 1.933 - 1.946: 99.2042% ( 2) 00:12:48.480 1.958 - 1.971: 99.2157% ( 2) 00:12:48.480 1.971 - 1.984: 99.2272% ( 2) 00:12:48.480 1.984 - 1.997: 99.2388% ( 2) 00:12:48.480 2.010 - 2.022: 99.2445% ( 1) 00:12:48.480 2.022 - 2.035: 99.2503% ( 1) 00:12:48.480 2.035 - 2.048: 99.2561% ( 1) 00:12:48.480 2.048 - 2.061: 99.2676% ( 2) 00:12:48.480 2.074 - 2.086: 99.2734% ( 1) 00:12:48.480 2.086 - 2.099: 99.2907% ( 3) 00:12:48.480 2.099 - 2.112: 99.3080% ( 3) 00:12:48.480 2.112 - 2.125: 99.3253% ( 3) 00:12:48.480 2.125 - 2.138: 99.3310% ( 1) 00:12:48.480 2.138 - 2.150: 99.3426% ( 2) 00:12:48.480 2.266 - 2.278: 99.3483% ( 1) 00:12:48.480 2.278 - 2.291: 99.3541% ( 1) 00:12:48.480 2.355 - 2.368: 99.3599% ( 1) 00:12:48.480 2.368 - 2.381: 99.3656% ( 1) 00:12:48.480 2.445 - 2.458: 99.3772% ( 2) 00:12:48.480 2.470 - 2.483: 99.3829% ( 1) 00:12:48.480 2.534 - 2.547: 99.3887% ( 1) 00:12:48.480 2.573 - 2.586: 99.3945% ( 1) 00:12:48.480 2.611 - 2.624: 99.4002% 
( 1) 00:12:48.481 2.662 - 2.675: 99.4060% ( 1) 00:12:48.481 2.790 - 2.803: 99.4118% ( 1) 00:12:48.481 2.803 - 2.816: 99.4175% ( 1) 00:12:48.481 3.763 - 3.789: 99.4233% ( 1) 00:12:48.481 3.942 - 3.968: 99.4291% ( 1) 00:12:48.481 4.685 - 4.710: 99.4348% ( 1) 00:12:48.481 4.864 - 4.890: 99.4406% ( 1) 00:12:48.481 5.069 - 5.094: 99.4464% ( 1) 00:12:48.481 5.171 - 5.197: 99.4521% ( 1) 00:12:48.481 5.402 - 5.427: 99.4579% ( 1) 00:12:48.481 5.555 - 5.581: 99.4694% ( 2) 00:12:48.481 5.581 - 5.606: 99.4752% ( 1) 00:12:48.481 5.632 - 5.658: 99.4810% ( 1) 00:12:48.481 5.658 - 5.683: 99.4867% ( 1) 00:12:48.481 5.734 - 5.760: 99.4925% ( 1) 00:12:48.481 5.760 - 5.786: 99.4983% ( 1) 00:12:48.481 5.811 - 5.837: 99.5040% ( 1) 00:12:48.481 5.837 - 5.862: 99.5098% ( 1) 00:12:48.481 5.888 - 5.914: 99.5156% ( 1) 00:12:48.481 5.965 - 5.990: 99.5213% ( 1) 00:12:48.481 6.016 - 6.042: 99.5271% ( 1) 00:12:48.481 6.246 - 6.272: 99.5386% ( 2) 00:12:48.481 6.272 - 6.298: 99.5444% ( 1) 00:12:48.481 7.373 - 7.424: 99.5502% ( 1) 00:12:48.481 [2024-04-24 21:28:11.029339] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:48.481 8.653 - 8.704: 99.5559% ( 1) 00:12:48.481 9.626 - 9.677: 99.5617% ( 1) 00:12:48.481 11.264 - 11.315: 99.5675% ( 1) 00:12:48.481 11.520 - 11.571: 99.5732% ( 1) 00:12:48.481 1035.469 - 1042.022: 99.5790% ( 1) 00:12:48.481 1199.309 - 1205.862: 99.5848% ( 1) 00:12:48.481 3670.016 - 3696.230: 99.5905% ( 1) 00:12:48.481 3984.589 - 4010.803: 99.9942% ( 70) 00:12:48.481 6973.030 - 7025.459: 100.0000% ( 1) 00:12:48.481 00:12:48.481 21:28:11 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:48.481 21:28:11 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:48.481 21:28:11 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:48.481 21:28:11 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:48.481 21:28:11 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:48.481 [ 00:12:48.481 { 00:12:48.481 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:48.481 "subtype": "Discovery", 00:12:48.481 "listen_addresses": [], 00:12:48.481 "allow_any_host": true, 00:12:48.481 "hosts": [] 00:12:48.481 }, 00:12:48.481 { 00:12:48.481 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:48.481 "subtype": "NVMe", 00:12:48.481 "listen_addresses": [ 00:12:48.481 { 00:12:48.481 "transport": "VFIOUSER", 00:12:48.481 "trtype": "VFIOUSER", 00:12:48.481 "adrfam": "IPv4", 00:12:48.481 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:48.481 "trsvcid": "0" 00:12:48.481 } 00:12:48.481 ], 00:12:48.481 "allow_any_host": true, 00:12:48.481 "hosts": [], 00:12:48.481 "serial_number": "SPDK1", 00:12:48.481 "model_number": "SPDK bdev Controller", 00:12:48.481 "max_namespaces": 32, 00:12:48.481 "min_cntlid": 1, 00:12:48.481 "max_cntlid": 65519, 00:12:48.481 "namespaces": [ 00:12:48.481 { 00:12:48.481 "nsid": 1, 00:12:48.481 "bdev_name": "Malloc1", 00:12:48.481 "name": "Malloc1", 00:12:48.481 "nguid": "B66C3BB98ACE4059AC4C3B96C34F605E", 00:12:48.481 "uuid": "b66c3bb9-8ace-4059-ac4c-3b96c34f605e" 00:12:48.481 }, 00:12:48.481 { 00:12:48.481 "nsid": 2, 00:12:48.481 "bdev_name": "Malloc3", 00:12:48.481 "name": "Malloc3", 00:12:48.481 "nguid": "AA57F9DCB4A345CE8305C523EA16C702", 00:12:48.481 "uuid": "aa57f9dc-b4a3-45ce-8305-c523ea16c702" 00:12:48.481 } 00:12:48.481 
] 00:12:48.481 }, 00:12:48.481 { 00:12:48.481 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:48.481 "subtype": "NVMe", 00:12:48.481 "listen_addresses": [ 00:12:48.481 { 00:12:48.481 "transport": "VFIOUSER", 00:12:48.481 "trtype": "VFIOUSER", 00:12:48.481 "adrfam": "IPv4", 00:12:48.481 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:48.481 "trsvcid": "0" 00:12:48.481 } 00:12:48.481 ], 00:12:48.481 "allow_any_host": true, 00:12:48.481 "hosts": [], 00:12:48.481 "serial_number": "SPDK2", 00:12:48.481 "model_number": "SPDK bdev Controller", 00:12:48.481 "max_namespaces": 32, 00:12:48.481 "min_cntlid": 1, 00:12:48.481 "max_cntlid": 65519, 00:12:48.481 "namespaces": [ 00:12:48.481 { 00:12:48.481 "nsid": 1, 00:12:48.481 "bdev_name": "Malloc2", 00:12:48.481 "name": "Malloc2", 00:12:48.481 "nguid": "CB3A964D56BF469BB2CF8CD5DF7AB53C", 00:12:48.481 "uuid": "cb3a964d-56bf-469b-b2cf-8cd5df7ab53c" 00:12:48.481 } 00:12:48.481 ] 00:12:48.481 } 00:12:48.481 ] 00:12:48.481 21:28:11 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:48.481 21:28:11 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:48.481 21:28:11 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2795108 00:12:48.481 21:28:11 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:48.481 21:28:11 -- common/autotest_common.sh@1251 -- # local i=0 00:12:48.481 21:28:11 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:48.481 21:28:11 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:48.481 21:28:11 -- common/autotest_common.sh@1262 -- # return 0 00:12:48.481 21:28:11 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:48.481 21:28:11 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:48.481 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.740 [2024-04-24 21:28:11.402277] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:48.740 Malloc4 00:12:48.740 21:28:11 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:48.740 [2024-04-24 21:28:11.610779] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.000 21:28:11 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:49.000 Asynchronous Event Request test 00:12:49.000 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.000 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.000 Registering asynchronous event callbacks... 00:12:49.000 Starting namespace attribute notice tests for all controllers... 00:12:49.000 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:49.000 aer_cb - Changed Namespace 00:12:49.000 Cleaning up... 
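The AER exercise just traced repeats, for cnode2, the pattern used earlier for cnode1: start the aer example with a touch file, wait for it to signal readiness, hot-add a second namespace over RPC, and let the resulting namespace-attribute notice complete the test; the subsystem dump that follows shows Malloc4 attached to cnode2 as nsid 2. A condensed sketch of that flow, with the touch-file semantics inferred from the trace order above and paths taken from this job's workspace (both assumptions; adjust for your checkout):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: this run's checkout
  TOUCH=/tmp/aer_touch_file
  rm -f "$TOUCH"
  # aer arms its AER callback and then touches $TOUCH; it exits once the
  # namespace-changed notice for namespace 2 (-n 2) arrives.
  "$SPDK_DIR"/test/nvme/aer/aer \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -n 2 -g -t "$TOUCH" &
  aerpid=$!
  while [ ! -e "$TOUCH" ]; do sleep 0.1; done   # equivalent of the harness's waitforfile
  rm -f "$TOUCH"
  # Hot-add the namespace: a new 64 MiB malloc bdev with 512-byte blocks,
  # attached to the subsystem as nsid 2 -- this is what fires the AER.
  "$SPDK_DIR"/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
  "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
  "$SPDK_DIR"/scripts/rpc.py nvmf_get_subsystems
  wait "$aerpid"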
00:12:49.000 [ 00:12:49.000 { 00:12:49.000 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:49.000 "subtype": "Discovery", 00:12:49.000 "listen_addresses": [], 00:12:49.000 "allow_any_host": true, 00:12:49.000 "hosts": [] 00:12:49.000 }, 00:12:49.000 { 00:12:49.000 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:49.000 "subtype": "NVMe", 00:12:49.000 "listen_addresses": [ 00:12:49.000 { 00:12:49.000 "transport": "VFIOUSER", 00:12:49.000 "trtype": "VFIOUSER", 00:12:49.000 "adrfam": "IPv4", 00:12:49.000 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:49.000 "trsvcid": "0" 00:12:49.000 } 00:12:49.000 ], 00:12:49.000 "allow_any_host": true, 00:12:49.000 "hosts": [], 00:12:49.000 "serial_number": "SPDK1", 00:12:49.000 "model_number": "SPDK bdev Controller", 00:12:49.000 "max_namespaces": 32, 00:12:49.000 "min_cntlid": 1, 00:12:49.000 "max_cntlid": 65519, 00:12:49.000 "namespaces": [ 00:12:49.000 { 00:12:49.000 "nsid": 1, 00:12:49.000 "bdev_name": "Malloc1", 00:12:49.000 "name": "Malloc1", 00:12:49.000 "nguid": "B66C3BB98ACE4059AC4C3B96C34F605E", 00:12:49.000 "uuid": "b66c3bb9-8ace-4059-ac4c-3b96c34f605e" 00:12:49.000 }, 00:12:49.000 { 00:12:49.000 "nsid": 2, 00:12:49.000 "bdev_name": "Malloc3", 00:12:49.000 "name": "Malloc3", 00:12:49.000 "nguid": "AA57F9DCB4A345CE8305C523EA16C702", 00:12:49.000 "uuid": "aa57f9dc-b4a3-45ce-8305-c523ea16c702" 00:12:49.000 } 00:12:49.000 ] 00:12:49.000 }, 00:12:49.000 { 00:12:49.000 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:49.000 "subtype": "NVMe", 00:12:49.000 "listen_addresses": [ 00:12:49.000 { 00:12:49.000 "transport": "VFIOUSER", 00:12:49.000 "trtype": "VFIOUSER", 00:12:49.000 "adrfam": "IPv4", 00:12:49.000 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:49.000 "trsvcid": "0" 00:12:49.000 } 00:12:49.000 ], 00:12:49.000 "allow_any_host": true, 00:12:49.000 "hosts": [], 00:12:49.000 "serial_number": "SPDK2", 00:12:49.000 "model_number": "SPDK bdev Controller", 00:12:49.000 "max_namespaces": 32, 00:12:49.000 "min_cntlid": 1, 00:12:49.000 "max_cntlid": 65519, 00:12:49.000 "namespaces": [ 00:12:49.000 { 00:12:49.000 "nsid": 1, 00:12:49.000 "bdev_name": "Malloc2", 00:12:49.000 "name": "Malloc2", 00:12:49.000 "nguid": "CB3A964D56BF469BB2CF8CD5DF7AB53C", 00:12:49.000 "uuid": "cb3a964d-56bf-469b-b2cf-8cd5df7ab53c" 00:12:49.000 }, 00:12:49.000 { 00:12:49.000 "nsid": 2, 00:12:49.000 "bdev_name": "Malloc4", 00:12:49.000 "name": "Malloc4", 00:12:49.000 "nguid": "68482CD2EDFF4EBE86A93155D08824C7", 00:12:49.000 "uuid": "68482cd2-edff-4ebe-86a9-3155d08824c7" 00:12:49.000 } 00:12:49.000 ] 00:12:49.000 } 00:12:49.000 ] 00:12:49.000 21:28:11 -- target/nvmf_vfio_user.sh@44 -- # wait 2795108 00:12:49.000 21:28:11 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:49.000 21:28:11 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2787086 00:12:49.000 21:28:11 -- common/autotest_common.sh@936 -- # '[' -z 2787086 ']' 00:12:49.000 21:28:11 -- common/autotest_common.sh@940 -- # kill -0 2787086 00:12:49.000 21:28:11 -- common/autotest_common.sh@941 -- # uname 00:12:49.000 21:28:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:49.000 21:28:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2787086 00:12:49.000 21:28:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:49.000 21:28:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:49.000 21:28:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2787086' 00:12:49.000 killing process with pid 2787086 00:12:49.000 
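(The killprocess() helper being stepped through here has roughly the shape below; the actual kill and wait appear in the trace just after this point. This is a sketch reconstructed from the xtrace output, not the verbatim helper from autotest_common.sh, which may differ in detail.)
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1              # @936: refuse an empty pid
      kill -0 "$pid" || return 1             # @940: probe that the process is alive
      if [ "$(uname)" = Linux ]; then        # @941
          process_name=$(ps --no-headers -o comm= "$pid")    # @942: reactor_0 here
      fi
      if [ "$process_name" = sudo ]; then    # @946: sudo wrappers are special-cased
          :   # details of the sudo branch are not visible in this trace
      fi
      echo "killing process with pid $pid"   # @954
      kill "$pid"                            # @955
      wait "$pid"                            # @960: reap it and surface its exit code
  }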
21:28:11 -- common/autotest_common.sh@955 -- # kill 2787086 00:12:49.000 [2024-04-24 21:28:11.862328] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:12:49.000 21:28:11 -- common/autotest_common.sh@960 -- # wait 2787086 00:12:49.260 21:28:12 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:49.260 21:28:12 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:49.260 21:28:12 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:49.260 21:28:12 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:49.260 21:28:12 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:49.519 21:28:12 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2795298 00:12:49.519 21:28:12 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2795298' 00:12:49.519 Process pid: 2795298 00:12:49.520 21:28:12 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:49.520 21:28:12 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:49.520 21:28:12 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2795298 00:12:49.520 21:28:12 -- common/autotest_common.sh@817 -- # '[' -z 2795298 ']' 00:12:49.520 21:28:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.520 21:28:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:49.520 21:28:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.520 21:28:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:49.520 21:28:12 -- common/autotest_common.sh@10 -- # set +x 00:12:49.520 [2024-04-24 21:28:12.195876] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:49.520 [2024-04-24 21:28:12.196810] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:12:49.520 [2024-04-24 21:28:12.196848] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.520 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.520 [2024-04-24 21:28:12.268462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.520 [2024-04-24 21:28:12.336458] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.520 [2024-04-24 21:28:12.336502] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.520 [2024-04-24 21:28:12.336513] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.520 [2024-04-24 21:28:12.336522] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.520 [2024-04-24 21:28:12.336529] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
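(The notices above also document how to inspect this interrupt-mode target while it runs: tracepoint group mask 0xFFFF was enabled at launch, so a snapshot can be pulled from the shared-memory trace file. A sketch under those assumptions follows; shm id 0 matches this run, and -M/-I are simply the transport arguments this test passes through to the VFIOUSER transport.)
  # start the target with all tracepoint groups enabled, in interrupt mode
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  # capture a snapshot of trace events at runtime, per the notice above,
  spdk_trace -s nvmf -i 0
  # or copy /dev/shm/nvmf_trace.0 for offline analysis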
00:12:49.520 [2024-04-24 21:28:12.336584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.520 [2024-04-24 21:28:12.336680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.520 [2024-04-24 21:28:12.336764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.520 [2024-04-24 21:28:12.336766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.779 [2024-04-24 21:28:12.422302] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:12:49.779 [2024-04-24 21:28:12.422441] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:12:49.779 [2024-04-24 21:28:12.422611] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:12:49.779 [2024-04-24 21:28:12.423081] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:49.779 [2024-04-24 21:28:12.423175] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:12:50.347 21:28:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:50.347 21:28:12 -- common/autotest_common.sh@850 -- # return 0 00:12:50.347 21:28:12 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:51.294 21:28:13 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:51.294 21:28:14 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:51.574 21:28:14 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:51.574 21:28:14 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:51.574 21:28:14 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:51.574 21:28:14 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:51.574 Malloc1 00:12:51.574 21:28:14 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:51.832 21:28:14 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:51.832 21:28:14 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:52.091 21:28:14 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:52.091 21:28:14 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:52.091 21:28:14 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:52.349 Malloc2 00:12:52.349 21:28:15 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:52.607 21:28:15 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:52.607 21:28:15 -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:52.866 21:28:15 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:52.866 21:28:15 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2795298 00:12:52.866 21:28:15 -- common/autotest_common.sh@936 -- # '[' -z 2795298 ']' 00:12:52.866 21:28:15 -- common/autotest_common.sh@940 -- # kill -0 2795298 00:12:52.866 21:28:15 -- common/autotest_common.sh@941 -- # uname 00:12:52.866 21:28:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:52.866 21:28:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2795298 00:12:52.866 21:28:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:52.866 21:28:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:52.866 21:28:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2795298' 00:12:52.866 killing process with pid 2795298 00:12:52.866 21:28:15 -- common/autotest_common.sh@955 -- # kill 2795298 00:12:52.866 21:28:15 -- common/autotest_common.sh@960 -- # wait 2795298 00:12:53.125 [2024-04-24 21:28:15.813894] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:12:53.125 21:28:15 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:53.125 21:28:15 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:53.125 00:12:53.125 real 0m51.501s 00:12:53.125 user 3m22.612s 00:12:53.125 sys 0m4.658s 00:12:53.125 21:28:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:53.125 21:28:15 -- common/autotest_common.sh@10 -- # set +x 00:12:53.125 ************************************ 00:12:53.125 END TEST nvmf_vfio_user 00:12:53.125 ************************************ 00:12:53.125 21:28:15 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:53.125 21:28:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:53.125 21:28:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:53.125 21:28:15 -- common/autotest_common.sh@10 -- # set +x 00:12:53.384 ************************************ 00:12:53.384 START TEST nvmf_vfio_user_nvme_compliance 00:12:53.384 ************************************ 00:12:53.385 21:28:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:53.385 * Looking for test storage... 
00:12:53.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:53.385 21:28:16 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.385 21:28:16 -- nvmf/common.sh@7 -- # uname -s 00:12:53.385 21:28:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.385 21:28:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.385 21:28:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.385 21:28:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.385 21:28:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.385 21:28:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.385 21:28:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.385 21:28:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.385 21:28:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.385 21:28:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.385 21:28:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:53.385 21:28:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:53.385 21:28:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.385 21:28:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.385 21:28:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.385 21:28:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.385 21:28:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.385 21:28:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.385 21:28:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.385 21:28:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.385 21:28:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.385 21:28:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.385 21:28:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.385 21:28:16 -- paths/export.sh@5 -- # export PATH 00:12:53.385 21:28:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.385 21:28:16 -- nvmf/common.sh@47 -- # : 0 00:12:53.385 21:28:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:53.385 21:28:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:53.385 21:28:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.385 21:28:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.385 21:28:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.385 21:28:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:53.385 21:28:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:53.385 21:28:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:53.385 21:28:16 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:53.385 21:28:16 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:53.385 21:28:16 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:53.385 21:28:16 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:53.385 21:28:16 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:53.385 21:28:16 -- compliance/compliance.sh@20 -- # nvmfpid=2796001 00:12:53.385 21:28:16 -- compliance/compliance.sh@21 -- # echo 'Process pid: 2796001' 00:12:53.385 Process pid: 2796001 00:12:53.385 21:28:16 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:53.385 21:28:16 -- compliance/compliance.sh@24 -- # waitforlisten 2796001 00:12:53.385 21:28:16 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:53.385 21:28:16 -- common/autotest_common.sh@817 -- # '[' -z 2796001 ']' 00:12:53.385 21:28:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.385 21:28:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:53.385 21:28:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.385 21:28:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:53.385 21:28:16 -- common/autotest_common.sh@10 -- # set +x 00:12:53.643 [2024-04-24 21:28:16.299664] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:12:53.643 [2024-04-24 21:28:16.299714] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.643 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.643 [2024-04-24 21:28:16.369820] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:53.643 [2024-04-24 21:28:16.442065] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.643 [2024-04-24 21:28:16.442102] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.643 [2024-04-24 21:28:16.442111] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.643 [2024-04-24 21:28:16.442119] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.643 [2024-04-24 21:28:16.442126] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.643 [2024-04-24 21:28:16.442174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.643 [2024-04-24 21:28:16.442270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.643 [2024-04-24 21:28:16.442273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.578 21:28:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:54.578 21:28:17 -- common/autotest_common.sh@850 -- # return 0 00:12:54.578 21:28:17 -- compliance/compliance.sh@26 -- # sleep 1 00:12:55.512 21:28:18 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:55.512 21:28:18 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:55.512 21:28:18 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:55.512 21:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.512 21:28:18 -- common/autotest_common.sh@10 -- # set +x 00:12:55.512 21:28:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.512 21:28:18 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:55.512 21:28:18 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:55.512 21:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.512 21:28:18 -- common/autotest_common.sh@10 -- # set +x 00:12:55.512 malloc0 00:12:55.512 21:28:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.512 21:28:18 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:55.512 21:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.512 21:28:18 -- common/autotest_common.sh@10 -- # set +x 00:12:55.512 21:28:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.512 21:28:18 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:55.512 21:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.512 21:28:18 -- common/autotest_common.sh@10 -- # set +x 00:12:55.512 21:28:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.512 21:28:18 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:55.512 21:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:55.512 21:28:18 -- common/autotest_common.sh@10 -- # set +x 00:12:55.512 21:28:18 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:55.512 21:28:18 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:55.512 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.512 00:12:55.512 00:12:55.512 CUnit - A unit testing framework for C - Version 2.1-3 00:12:55.513 http://cunit.sourceforge.net/ 00:12:55.513 00:12:55.513 00:12:55.513 Suite: nvme_compliance 00:12:55.513 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-24 21:28:18.352901] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.513 [2024-04-24 21:28:18.354247] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:55.513 [2024-04-24 21:28:18.354262] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:55.513 [2024-04-24 21:28:18.354270] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:55.513 [2024-04-24 21:28:18.355917] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.513 passed 00:12:55.771 Test: admin_identify_ctrlr_verify_fused ...[2024-04-24 21:28:18.434430] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.771 [2024-04-24 21:28:18.437459] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.771 passed 00:12:55.771 Test: admin_identify_ns ...[2024-04-24 21:28:18.514604] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:55.771 [2024-04-24 21:28:18.575462] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:55.771 [2024-04-24 21:28:18.583459] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:55.771 [2024-04-24 21:28:18.604575] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:55.771 passed 00:12:56.029 Test: admin_get_features_mandatory_features ...[2024-04-24 21:28:18.679844] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.029 [2024-04-24 21:28:18.682861] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.029 passed 00:12:56.029 Test: admin_get_features_optional_features ...[2024-04-24 21:28:18.759328] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.029 [2024-04-24 21:28:18.762357] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.029 passed 00:12:56.029 Test: admin_set_features_number_of_queues ...[2024-04-24 21:28:18.837871] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.287 [2024-04-24 21:28:18.942542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.287 passed 00:12:56.287 Test: admin_get_log_page_mandatory_logs ...[2024-04-24 21:28:19.016016] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.287 [2024-04-24 21:28:19.019036] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.287 passed 00:12:56.287 Test: admin_get_log_page_with_lpo ...[2024-04-24 21:28:19.096598] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.287 [2024-04-24 21:28:19.165465] 
ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:56.546 [2024-04-24 21:28:19.178526] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.546 passed 00:12:56.546 Test: fabric_property_get ...[2024-04-24 21:28:19.251022] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.546 [2024-04-24 21:28:19.252249] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:56.547 [2024-04-24 21:28:19.254042] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.547 passed 00:12:56.547 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-24 21:28:19.330553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.547 [2024-04-24 21:28:19.331785] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:56.547 [2024-04-24 21:28:19.333576] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.547 passed 00:12:56.547 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-24 21:28:19.408532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.805 [2024-04-24 21:28:19.494461] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:56.805 [2024-04-24 21:28:19.510461] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:56.805 [2024-04-24 21:28:19.515548] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.805 passed 00:12:56.805 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-24 21:28:19.588990] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:56.805 [2024-04-24 21:28:19.590212] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:56.805 [2024-04-24 21:28:19.592009] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:56.805 passed 00:12:56.805 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-24 21:28:19.668648] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.063 [2024-04-24 21:28:19.748470] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:57.063 [2024-04-24 21:28:19.772459] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:57.063 [2024-04-24 21:28:19.777556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.063 passed 00:12:57.063 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-24 21:28:19.850098] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.063 [2024-04-24 21:28:19.851321] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:57.063 [2024-04-24 21:28:19.851347] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:57.063 [2024-04-24 21:28:19.853114] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.063 passed 00:12:57.063 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-24 21:28:19.929660] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.321 [2024-04-24 21:28:20.022473] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:12:57.321 [2024-04-24 21:28:20.030458] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:57.321 [2024-04-24 21:28:20.038464] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:57.321 [2024-04-24 21:28:20.046460] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:57.321 [2024-04-24 21:28:20.075609] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.321 passed 00:12:57.321 Test: admin_create_io_sq_verify_pc ...[2024-04-24 21:28:20.152315] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.321 [2024-04-24 21:28:20.169468] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:57.321 [2024-04-24 21:28:20.187539] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.579 passed 00:12:57.579 Test: admin_create_io_qp_max_qps ...[2024-04-24 21:28:20.262051] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.529 [2024-04-24 21:28:21.359461] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:59.095 [2024-04-24 21:28:21.759552] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.095 passed 00:12:59.095 Test: admin_create_io_sq_shared_cq ...[2024-04-24 21:28:21.832064] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.095 [2024-04-24 21:28:21.964455] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:59.353 [2024-04-24 21:28:22.001516] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.353 passed 00:12:59.353 00:12:59.353 Run Summary: Type Total Ran Passed Failed Inactive 00:12:59.353 suites 1 1 n/a 0 0 00:12:59.353 tests 18 18 18 0 0 00:12:59.353 asserts 360 360 360 0 n/a 00:12:59.353 00:12:59.353 Elapsed time = 1.498 seconds 00:12:59.353 21:28:22 -- compliance/compliance.sh@42 -- # killprocess 2796001 00:12:59.354 21:28:22 -- common/autotest_common.sh@936 -- # '[' -z 2796001 ']' 00:12:59.354 21:28:22 -- common/autotest_common.sh@940 -- # kill -0 2796001 00:12:59.354 21:28:22 -- common/autotest_common.sh@941 -- # uname 00:12:59.354 21:28:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:59.354 21:28:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2796001 00:12:59.354 21:28:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:59.354 21:28:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:59.354 21:28:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2796001' 00:12:59.354 killing process with pid 2796001 00:12:59.354 21:28:22 -- common/autotest_common.sh@955 -- # kill 2796001 00:12:59.354 21:28:22 -- common/autotest_common.sh@960 -- # wait 2796001 00:12:59.612 21:28:22 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:59.612 21:28:22 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:59.612 00:12:59.612 real 0m6.205s 00:12:59.612 user 0m17.422s 00:12:59.612 sys 0m0.711s 00:12:59.612 21:28:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:59.612 21:28:22 -- common/autotest_common.sh@10 -- # set +x 00:12:59.612 ************************************ 00:12:59.612 END TEST 
nvmf_vfio_user_nvme_compliance 00:12:59.612 ************************************ 00:12:59.612 21:28:22 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:59.612 21:28:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:59.612 21:28:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:59.612 21:28:22 -- common/autotest_common.sh@10 -- # set +x 00:12:59.870 ************************************ 00:12:59.870 START TEST nvmf_vfio_user_fuzz 00:12:59.870 ************************************ 00:12:59.870 21:28:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:59.870 * Looking for test storage... 00:12:59.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.870 21:28:22 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.870 21:28:22 -- nvmf/common.sh@7 -- # uname -s 00:12:59.870 21:28:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.870 21:28:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.870 21:28:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.870 21:28:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.870 21:28:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.870 21:28:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.870 21:28:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.870 21:28:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.870 21:28:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.870 21:28:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.870 21:28:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:59.870 21:28:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:59.870 21:28:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.870 21:28:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.870 21:28:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.870 21:28:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.870 21:28:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.870 21:28:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.870 21:28:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.870 21:28:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.871 21:28:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.871 21:28:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.871 21:28:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.871 21:28:22 -- paths/export.sh@5 -- # export PATH 00:12:59.871 21:28:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.871 21:28:22 -- nvmf/common.sh@47 -- # : 0 00:12:59.871 21:28:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:59.871 21:28:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:59.871 21:28:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.871 21:28:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.871 21:28:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.871 21:28:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:59.871 21:28:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:59.871 21:28:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:59.871 21:28:22 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:59.871 21:28:22 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:59.871 21:28:22 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:59.871 21:28:22 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:59.871 21:28:22 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:59.871 21:28:22 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:59.871 21:28:22 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:59.871 21:28:22 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2797134 00:12:59.871 21:28:22 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:59.871 21:28:22 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2797134' 00:12:59.871 Process pid: 2797134 00:12:59.871 21:28:22 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:59.871 21:28:22 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2797134 00:12:59.871 21:28:22 -- common/autotest_common.sh@817 -- 
# '[' -z 2797134 ']' 00:12:59.871 21:28:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.871 21:28:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:59.871 21:28:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.871 21:28:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:59.871 21:28:22 -- common/autotest_common.sh@10 -- # set +x 00:13:00.804 21:28:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:00.804 21:28:23 -- common/autotest_common.sh@850 -- # return 0 00:13:00.804 21:28:23 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:01.748 21:28:24 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:01.748 21:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.748 21:28:24 -- common/autotest_common.sh@10 -- # set +x 00:13:01.748 21:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.748 21:28:24 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:01.748 21:28:24 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:01.748 21:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.748 21:28:24 -- common/autotest_common.sh@10 -- # set +x 00:13:01.748 malloc0 00:13:01.748 21:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.748 21:28:24 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:01.748 21:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.748 21:28:24 -- common/autotest_common.sh@10 -- # set +x 00:13:01.748 21:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.748 21:28:24 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:01.748 21:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.748 21:28:24 -- common/autotest_common.sh@10 -- # set +x 00:13:01.748 21:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.748 21:28:24 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:01.748 21:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.748 21:28:24 -- common/autotest_common.sh@10 -- # set +x 00:13:01.748 21:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.748 21:28:24 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:01.748 21:28:24 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:33.823 Fuzzing completed. 
Shutting down the fuzz application 00:13:33.823 00:13:33.823 Dumping successful admin opcodes: 00:13:33.823 8, 9, 10, 24, 00:13:33.823 Dumping successful io opcodes: 00:13:33.823 0, 00:13:33.823 NS: 0x200003a1ef00 I/O qp, Total commands completed: 894009, total successful commands: 3485, random_seed: 1838630528 00:13:33.823 NS: 0x200003a1ef00 admin qp, Total commands completed: 211283, total successful commands: 1699, random_seed: 1663801408 00:13:33.823 21:28:54 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:33.823 21:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.823 21:28:54 -- common/autotest_common.sh@10 -- # set +x 00:13:33.823 21:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.823 21:28:54 -- target/vfio_user_fuzz.sh@46 -- # killprocess 2797134 00:13:33.823 21:28:54 -- common/autotest_common.sh@936 -- # '[' -z 2797134 ']' 00:13:33.823 21:28:54 -- common/autotest_common.sh@940 -- # kill -0 2797134 00:13:33.823 21:28:54 -- common/autotest_common.sh@941 -- # uname 00:13:33.823 21:28:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:33.823 21:28:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2797134 00:13:33.823 21:28:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:33.823 21:28:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:33.823 21:28:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2797134' 00:13:33.823 killing process with pid 2797134 00:13:33.823 21:28:55 -- common/autotest_common.sh@955 -- # kill 2797134 00:13:33.823 21:28:55 -- common/autotest_common.sh@960 -- # wait 2797134 00:13:33.823 21:28:55 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:33.823 21:28:55 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:33.823 00:13:33.823 real 0m32.823s 00:13:33.823 user 0m29.739s 00:13:33.823 sys 0m31.677s 00:13:33.823 21:28:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:33.823 21:28:55 -- common/autotest_common.sh@10 -- # set +x 00:13:33.823 ************************************ 00:13:33.823 END TEST nvmf_vfio_user_fuzz 00:13:33.823 ************************************ 00:13:33.823 21:28:55 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:33.823 21:28:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:33.823 21:28:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:33.823 21:28:55 -- common/autotest_common.sh@10 -- # set +x 00:13:33.823 ************************************ 00:13:33.823 START TEST nvmf_host_management 00:13:33.823 ************************************ 00:13:33.823 21:28:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:33.823 * Looking for test storage... 
00:13:33.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.823 21:28:55 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.823 21:28:55 -- nvmf/common.sh@7 -- # uname -s 00:13:33.823 21:28:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.823 21:28:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.823 21:28:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.823 21:28:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.823 21:28:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.823 21:28:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.823 21:28:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.823 21:28:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.823 21:28:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.823 21:28:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.823 21:28:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:33.823 21:28:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:33.823 21:28:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.823 21:28:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.823 21:28:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.823 21:28:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.823 21:28:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.823 21:28:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.823 21:28:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.823 21:28:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.823 21:28:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.823 21:28:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.823 21:28:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.823 21:28:55 -- paths/export.sh@5 -- # export PATH 00:13:33.823 21:28:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.823 21:28:55 -- nvmf/common.sh@47 -- # : 0 00:13:33.823 21:28:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:33.823 21:28:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:33.823 21:28:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.823 21:28:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.823 21:28:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.823 21:28:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:33.823 21:28:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:33.823 21:28:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:33.823 21:28:55 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:33.823 21:28:55 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:33.823 21:28:55 -- target/host_management.sh@105 -- # nvmftestinit 00:13:33.823 21:28:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:33.823 21:28:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.823 21:28:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:33.823 21:28:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:33.823 21:28:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:33.823 21:28:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.823 21:28:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.823 21:28:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.823 21:28:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:33.823 21:28:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:33.823 21:28:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:33.823 21:28:55 -- common/autotest_common.sh@10 -- # set +x 00:13:39.098 21:29:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:39.098 21:29:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:39.098 21:29:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:39.098 21:29:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:39.098 21:29:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:39.098 21:29:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:39.098 21:29:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:39.098 21:29:01 -- nvmf/common.sh@295 -- # net_devs=() 00:13:39.098 21:29:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:39.098 
21:29:01 -- nvmf/common.sh@296 -- # e810=() 00:13:39.098 21:29:01 -- nvmf/common.sh@296 -- # local -ga e810 00:13:39.098 21:29:01 -- nvmf/common.sh@297 -- # x722=() 00:13:39.098 21:29:01 -- nvmf/common.sh@297 -- # local -ga x722 00:13:39.098 21:29:01 -- nvmf/common.sh@298 -- # mlx=() 00:13:39.098 21:29:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:39.098 21:29:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.098 21:29:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.098 21:29:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.098 21:29:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.098 21:29:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.098 21:29:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.098 21:29:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.098 21:29:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.098 21:29:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.098 21:29:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.098 21:29:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.098 21:29:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:39.098 21:29:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:39.098 21:29:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:39.098 21:29:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.098 21:29:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:39.098 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:39.098 21:29:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.098 21:29:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:39.098 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:39.098 21:29:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:39.098 21:29:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.098 21:29:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.098 21:29:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:39.098 21:29:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.098 21:29:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:13:39.098 Found net devices under 0000:af:00.0: cvl_0_0 00:13:39.098 21:29:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.098 21:29:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.098 21:29:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.098 21:29:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:39.098 21:29:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.098 21:29:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:39.098 Found net devices under 0000:af:00.1: cvl_0_1 00:13:39.098 21:29:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.098 21:29:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:39.098 21:29:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:39.098 21:29:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:39.098 21:29:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:39.099 21:29:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:39.099 21:29:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.099 21:29:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.099 21:29:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.099 21:29:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:39.099 21:29:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.099 21:29:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.099 21:29:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:39.099 21:29:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.099 21:29:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.099 21:29:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:39.099 21:29:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:39.099 21:29:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.099 21:29:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:39.358 21:29:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:39.358 21:29:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:39.358 21:29:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:39.358 21:29:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:39.358 21:29:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:39.358 21:29:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:39.358 21:29:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:39.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:13:39.358 00:13:39.358 --- 10.0.0.2 ping statistics --- 00:13:39.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.358 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:13:39.358 21:29:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:39.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:39.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:13:39.358 00:13:39.358 --- 10.0.0.1 ping statistics --- 00:13:39.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.358 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:13:39.358 21:29:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.358 21:29:02 -- nvmf/common.sh@411 -- # return 0 00:13:39.358 21:29:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:39.358 21:29:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.358 21:29:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:39.358 21:29:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:39.358 21:29:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.358 21:29:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:39.358 21:29:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:39.358 21:29:02 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:13:39.358 21:29:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:39.358 21:29:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.358 21:29:02 -- common/autotest_common.sh@10 -- # set +x 00:13:39.617 ************************************ 00:13:39.617 START TEST nvmf_host_management 00:13:39.617 ************************************ 00:13:39.617 21:29:02 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:13:39.617 21:29:02 -- target/host_management.sh@69 -- # starttarget 00:13:39.617 21:29:02 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:39.617 21:29:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:39.617 21:29:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:39.617 21:29:02 -- common/autotest_common.sh@10 -- # set +x 00:13:39.617 21:29:02 -- nvmf/common.sh@470 -- # nvmfpid=2806104 00:13:39.617 21:29:02 -- nvmf/common.sh@471 -- # waitforlisten 2806104 00:13:39.617 21:29:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:39.617 21:29:02 -- common/autotest_common.sh@817 -- # '[' -z 2806104 ']' 00:13:39.617 21:29:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.617 21:29:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:39.617 21:29:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.617 21:29:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:39.617 21:29:02 -- common/autotest_common.sh@10 -- # set +x 00:13:39.617 [2024-04-24 21:29:02.375477] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:13:39.617 [2024-04-24 21:29:02.375522] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.617 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.617 [2024-04-24 21:29:02.448604] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.877 [2024-04-24 21:29:02.523018] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:39.877 [2024-04-24 21:29:02.523054] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.877 [2024-04-24 21:29:02.523063] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.877 [2024-04-24 21:29:02.523071] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.877 [2024-04-24 21:29:02.523078] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.877 [2024-04-24 21:29:02.523179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.877 [2024-04-24 21:29:02.523261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.877 [2024-04-24 21:29:02.523373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.877 [2024-04-24 21:29:02.523374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:40.444 21:29:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:40.444 21:29:03 -- common/autotest_common.sh@850 -- # return 0 00:13:40.444 21:29:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:40.444 21:29:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:40.444 21:29:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.444 21:29:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.444 21:29:03 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:40.444 21:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.444 21:29:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.444 [2024-04-24 21:29:03.221283] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.444 21:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.444 21:29:03 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:40.444 21:29:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:40.444 21:29:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.444 21:29:03 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:40.444 21:29:03 -- target/host_management.sh@23 -- # cat 00:13:40.444 21:29:03 -- target/host_management.sh@30 -- # rpc_cmd 00:13:40.444 21:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.444 21:29:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.444 Malloc0 00:13:40.444 [2024-04-24 21:29:03.283738] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.444 21:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.444 21:29:03 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:40.444 21:29:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:40.444 21:29:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.703 21:29:03 -- target/host_management.sh@73 -- # perfpid=2806176 00:13:40.703 21:29:03 -- target/host_management.sh@74 -- # waitforlisten 2806176 /var/tmp/bdevperf.sock 00:13:40.703 21:29:03 -- common/autotest_common.sh@817 -- # '[' -z 2806176 ']' 00:13:40.703 21:29:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:40.703 21:29:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:40.703 21:29:03 -- target/host_management.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:40.703 21:29:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:40.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:40.703 21:29:03 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:40.703 21:29:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:40.703 21:29:03 -- nvmf/common.sh@521 -- # config=() 00:13:40.703 21:29:03 -- common/autotest_common.sh@10 -- # set +x 00:13:40.703 21:29:03 -- nvmf/common.sh@521 -- # local subsystem config 00:13:40.703 21:29:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:40.703 21:29:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:40.703 { 00:13:40.703 "params": { 00:13:40.703 "name": "Nvme$subsystem", 00:13:40.703 "trtype": "$TEST_TRANSPORT", 00:13:40.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:40.703 "adrfam": "ipv4", 00:13:40.703 "trsvcid": "$NVMF_PORT", 00:13:40.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:40.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:40.703 "hdgst": ${hdgst:-false}, 00:13:40.703 "ddgst": ${ddgst:-false} 00:13:40.703 }, 00:13:40.703 "method": "bdev_nvme_attach_controller" 00:13:40.703 } 00:13:40.703 EOF 00:13:40.703 )") 00:13:40.703 21:29:03 -- nvmf/common.sh@543 -- # cat 00:13:40.703 21:29:03 -- nvmf/common.sh@545 -- # jq . 00:13:40.703 21:29:03 -- nvmf/common.sh@546 -- # IFS=, 00:13:40.703 21:29:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:40.703 "params": { 00:13:40.703 "name": "Nvme0", 00:13:40.703 "trtype": "tcp", 00:13:40.703 "traddr": "10.0.0.2", 00:13:40.703 "adrfam": "ipv4", 00:13:40.703 "trsvcid": "4420", 00:13:40.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:40.703 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:40.703 "hdgst": false, 00:13:40.703 "ddgst": false 00:13:40.703 }, 00:13:40.703 "method": "bdev_nvme_attach_controller" 00:13:40.703 }' 00:13:40.703 [2024-04-24 21:29:03.385051] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:13:40.703 [2024-04-24 21:29:03.385101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2806176 ] 00:13:40.703 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.703 [2024-04-24 21:29:03.459510] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.703 [2024-04-24 21:29:03.538588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.962 Running I/O for 10 seconds... 
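The gen_nvmf_target_json trace above only prints the bdev_nvme_attach_controller entry; bdevperf actually receives it wrapped in the usual SPDK JSON-config envelope through the /dev/fd/63 process substitution. A sketch of the equivalent on-disk file, using the values resolved in this run (the /tmp/bdevperf.json path is illustrative):

    # Write the same config bdevperf reads from /dev/fd/63, as a regular file.
    cat <<'EOF' > /tmp/bdevperf.json
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Equivalent invocation to the one traced above:
    # build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10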
00:13:41.532 21:29:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:41.532 21:29:04 -- common/autotest_common.sh@850 -- # return 0 00:13:41.532 21:29:04 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:41.532 21:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.532 21:29:04 -- common/autotest_common.sh@10 -- # set +x 00:13:41.532 21:29:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.532 21:29:04 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:41.532 21:29:04 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:41.532 21:29:04 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:41.532 21:29:04 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:41.532 21:29:04 -- target/host_management.sh@52 -- # local ret=1 00:13:41.532 21:29:04 -- target/host_management.sh@53 -- # local i 00:13:41.532 21:29:04 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:41.532 21:29:04 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:41.532 21:29:04 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:41.532 21:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.532 21:29:04 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:41.532 21:29:04 -- common/autotest_common.sh@10 -- # set +x 00:13:41.532 21:29:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.532 21:29:04 -- target/host_management.sh@55 -- # read_io_count=578 00:13:41.532 21:29:04 -- target/host_management.sh@58 -- # '[' 578 -ge 100 ']' 00:13:41.532 21:29:04 -- target/host_management.sh@59 -- # ret=0 00:13:41.532 21:29:04 -- target/host_management.sh@60 -- # break 00:13:41.532 21:29:04 -- target/host_management.sh@64 -- # return 0 00:13:41.532 21:29:04 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:41.532 21:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.532 21:29:04 -- common/autotest_common.sh@10 -- # set +x 00:13:41.532 [2024-04-24 21:29:04.266954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1555a90 is same with the state(5) to be set 00:13:41.532 [... the preceding tcp.c:1587 *ERROR* line repeats several dozen times with advancing timestamps, through 21:29:04.267541; duplicates trimmed ...] 00:13:41.533 [2024-04-24 21:29:04.268212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.533 [2024-04-24 21:29:04.268247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.533 [... matching READ/ABORTED pairs follow for cid:1 through cid:63, with only cid, lba and timestamps advancing: lba steps by 128 from 73856 to 81792, timestamps 21:29:04.268266 through 21:29:04.269598; 63 further pairs trimmed ...] 00:13:41.534 [2024-04-24 21:29:04.269608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1861800 is same with the state(5) to be set 00:13:41.533 [2024-04-24 21:29:04.269664] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb:
*NOTICE*: qpair 0x1861800 was disconnected and freed. reset controller. 00:13:41.534 [2024-04-24 21:29:04.270570] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:41.534 21:29:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.534 task offset: 73728 on job bdev=Nvme0n1 fails 00:13:41.534 00:13:41.534 Latency(us) 00:13:41.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.534 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:41.534 Job: Nvme0n1 ended in about 0.57 seconds with error 00:13:41.534 Verification LBA range: start 0x0 length 0x400 00:13:41.534 Nvme0n1 : 0.57 1012.75 63.30 112.53 0.00 55907.41 11062.48 58300.83 00:13:41.534 =================================================================================================================== 00:13:41.534 Total : 1012.75 63.30 112.53 0.00 55907.41 11062.48 58300.83 00:13:41.534 [2024-04-24 21:29:04.272124] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:41.534 [2024-04-24 21:29:04.272143] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1450b30 (9): Bad file descriptor 00:13:41.534 21:29:04 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:41.534 21:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.534 21:29:04 -- common/autotest_common.sh@10 -- # set +x 00:13:41.534 [2024-04-24 21:29:04.276671] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:13:41.534 [2024-04-24 21:29:04.276813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:41.534 [2024-04-24 21:29:04.276843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.534 [2024-04-24 21:29:04.276859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:13:41.534 [2024-04-24 21:29:04.276870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:13:41.534 [2024-04-24 21:29:04.276881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:13:41.535 [2024-04-24 21:29:04.276891] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1450b30 00:13:41.535 [2024-04-24 21:29:04.276914] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1450b30 (9): Bad file descriptor 00:13:41.535 [2024-04-24 21:29:04.276930] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:13:41.535 [2024-04-24 21:29:04.276940] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:13:41.535 [2024-04-24 21:29:04.276951] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:13:41.535 [2024-04-24 21:29:04.276967] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
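The burst of errors above is the fault the test injects, not a malfunction: host_management.sh@84 removed nqn.2016-06.io.spdk:host0 from the subsystem's allowed list while bdevperf still had 64 reads queued, so the target aborted the whole submission queue (ABORTED - SQ DELETION) and rejected the reconnect CONNECT with sct 1 / sc 132 ('does not allow host'); @85 then re-adds the host so recovery can be verified. Driven by hand against a live target, the same toggle is roughly the following (the rpc shell variable is just shorthand for the rpc.py path used throughout this job):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Revoke the host: its active qpairs are deleted and new CONNECTs are refused.
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Restore access: CONNECTs from this hostnqn succeed again.
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0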
00:13:41.535 21:29:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.535 21:29:04 -- target/host_management.sh@87 -- # sleep 1 00:13:42.470 21:29:05 -- target/host_management.sh@91 -- # kill -9 2806176 00:13:42.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2806176) - No such process 00:13:42.470 21:29:05 -- target/host_management.sh@91 -- # true 00:13:42.470 21:29:05 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:42.470 21:29:05 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:42.470 21:29:05 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:42.470 21:29:05 -- nvmf/common.sh@521 -- # config=() 00:13:42.470 21:29:05 -- nvmf/common.sh@521 -- # local subsystem config 00:13:42.470 21:29:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:42.470 21:29:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:42.470 { 00:13:42.470 "params": { 00:13:42.470 "name": "Nvme$subsystem", 00:13:42.470 "trtype": "$TEST_TRANSPORT", 00:13:42.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.470 "adrfam": "ipv4", 00:13:42.470 "trsvcid": "$NVMF_PORT", 00:13:42.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.470 "hdgst": ${hdgst:-false}, 00:13:42.470 "ddgst": ${ddgst:-false} 00:13:42.470 }, 00:13:42.470 "method": "bdev_nvme_attach_controller" 00:13:42.470 } 00:13:42.470 EOF 00:13:42.470 )") 00:13:42.470 21:29:05 -- nvmf/common.sh@543 -- # cat 00:13:42.470 21:29:05 -- nvmf/common.sh@545 -- # jq . 00:13:42.470 21:29:05 -- nvmf/common.sh@546 -- # IFS=, 00:13:42.470 21:29:05 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:42.470 "params": { 00:13:42.470 "name": "Nvme0", 00:13:42.470 "trtype": "tcp", 00:13:42.470 "traddr": "10.0.0.2", 00:13:42.470 "adrfam": "ipv4", 00:13:42.470 "trsvcid": "4420", 00:13:42.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:42.470 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:42.470 "hdgst": false, 00:13:42.470 "ddgst": false 00:13:42.470 }, 00:13:42.470 "method": "bdev_nvme_attach_controller" 00:13:42.470 }' 00:13:42.470 [2024-04-24 21:29:05.339819] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:13:42.470 [2024-04-24 21:29:05.339874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2806636 ] 00:13:42.729 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.729 [2024-04-24 21:29:05.411238] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.729 [2024-04-24 21:29:05.477411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.988 Running I/O for 1 seconds... 
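Before the fault was injected, the harness proved I/O was actually flowing with the waitforio loop traced earlier (it broke out on its first probe with read_io_count=578). Condensed, that check looks roughly like the sketch below; the one-second sleep between probes is an assumption, since the traced run never reached a second iteration:

    waitforio_sketch() {
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        local i read_io_count ret=1
        for ((i = 10; i != 0; i--)); do
            # Ask bdevperf's private RPC socket for the bdev's read counter.
            read_io_count=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                            | jq -r '.bdevs[0].num_read_ops')
            # Declare the path healthy once at least 100 reads have completed.
            [ "$read_io_count" -ge 100 ] && { ret=0; break; }
            sleep 1  # assumed poll interval
        done
        return $ret
    }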
00:13:43.923 00:13:43.923 Latency(us) 00:13:43.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.923 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:43.923 Verification LBA range: start 0x0 length 0x400 00:13:43.923 Nvme0n1 : 1.01 1199.79 74.99 0.00 0.00 52598.70 10118.76 62495.13 00:13:43.923 =================================================================================================================== 00:13:43.923 Total : 1199.79 74.99 0.00 0.00 52598.70 10118.76 62495.13 00:13:44.181 21:29:06 -- target/host_management.sh@102 -- # stoptarget 00:13:44.181 21:29:06 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:44.181 21:29:06 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:44.181 21:29:06 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:44.181 21:29:06 -- target/host_management.sh@40 -- # nvmftestfini 00:13:44.181 21:29:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:44.181 21:29:06 -- nvmf/common.sh@117 -- # sync 00:13:44.181 21:29:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.181 21:29:06 -- nvmf/common.sh@120 -- # set +e 00:13:44.181 21:29:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.181 21:29:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.181 rmmod nvme_tcp 00:13:44.181 rmmod nvme_fabrics 00:13:44.181 rmmod nvme_keyring 00:13:44.181 21:29:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.181 21:29:07 -- nvmf/common.sh@124 -- # set -e 00:13:44.181 21:29:07 -- nvmf/common.sh@125 -- # return 0 00:13:44.181 21:29:07 -- nvmf/common.sh@478 -- # '[' -n 2806104 ']' 00:13:44.181 21:29:07 -- nvmf/common.sh@479 -- # killprocess 2806104 00:13:44.181 21:29:07 -- common/autotest_common.sh@936 -- # '[' -z 2806104 ']' 00:13:44.182 21:29:07 -- common/autotest_common.sh@940 -- # kill -0 2806104 00:13:44.182 21:29:07 -- common/autotest_common.sh@941 -- # uname 00:13:44.182 21:29:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:44.182 21:29:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2806104 00:13:44.439 21:29:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:44.439 21:29:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:44.439 21:29:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2806104' 00:13:44.439 killing process with pid 2806104 00:13:44.439 21:29:07 -- common/autotest_common.sh@955 -- # kill 2806104 00:13:44.439 21:29:07 -- common/autotest_common.sh@960 -- # wait 2806104 00:13:44.439 [2024-04-24 21:29:07.305526] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:44.698 21:29:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:44.698 21:29:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:44.698 21:29:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:44.698 21:29:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.698 21:29:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:44.698 21:29:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.698 21:29:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.698 21:29:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.601 21:29:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
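The teardown just logged mirrors the namespace plumbing set up by nvmf_tcp_init earlier in the run. Condensed to its effect it is roughly the following; the explicit 'ip netns del' at the end is an assumption, as this log only shows the module unloads, the target kill, and the address flush:

    modprobe -v -r nvme-tcp              # drags out nvme_tcp, nvme_fabrics, nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid=2806104; the target is a child of this shell
    ip -4 addr flush cvl_0_1             # nvmf/common.sh@279 above
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true  # assumed cleanup of the namespace created earlier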
00:13:46.601 00:13:46.601 real 0m7.092s 00:13:46.601 user 0m21.537s 00:13:46.601 sys 0m1.359s 00:13:46.601 21:29:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:46.601 21:29:09 -- common/autotest_common.sh@10 -- # set +x 00:13:46.601 ************************************ 00:13:46.601 END TEST nvmf_host_management 00:13:46.601 ************************************ 00:13:46.601 21:29:09 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:46.601 00:13:46.601 real 0m13.953s 00:13:46.601 user 0m23.450s 00:13:46.601 sys 0m6.341s 00:13:46.601 21:29:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:46.601 21:29:09 -- common/autotest_common.sh@10 -- # set +x 00:13:46.601 ************************************ 00:13:46.601 END TEST nvmf_host_management 00:13:46.601 ************************************ 00:13:46.860 21:29:09 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:46.860 21:29:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:46.860 21:29:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:46.860 21:29:09 -- common/autotest_common.sh@10 -- # set +x 00:13:46.860 ************************************ 00:13:46.860 START TEST nvmf_lvol 00:13:46.860 ************************************ 00:13:46.860 21:29:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:47.119 * Looking for test storage... 00:13:47.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.119 21:29:09 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.119 21:29:09 -- nvmf/common.sh@7 -- # uname -s 00:13:47.119 21:29:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.119 21:29:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.119 21:29:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.119 21:29:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.119 21:29:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.119 21:29:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.119 21:29:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.119 21:29:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.119 21:29:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.119 21:29:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.119 21:29:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:47.119 21:29:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:47.119 21:29:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.119 21:29:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.119 21:29:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.119 21:29:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.119 21:29:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.119 21:29:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.119 21:29:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.119 21:29:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.119 21:29:09 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.119 21:29:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.119 21:29:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.119 21:29:09 -- paths/export.sh@5 -- # export PATH 00:13:47.119 21:29:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.119 21:29:09 -- nvmf/common.sh@47 -- # : 0 00:13:47.119 21:29:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:47.119 21:29:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:47.119 21:29:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.119 21:29:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.119 21:29:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.119 21:29:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:47.119 21:29:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:47.119 21:29:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:47.119 21:29:09 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:47.119 21:29:09 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:47.119 21:29:09 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:47.119 21:29:09 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:47.119 21:29:09 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.119 21:29:09 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:47.119 21:29:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:47.119 21:29:09 -- nvmf/common.sh@435 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.119 21:29:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:47.119 21:29:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:47.119 21:29:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:47.119 21:29:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.119 21:29:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.119 21:29:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.119 21:29:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:47.119 21:29:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:47.119 21:29:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:47.119 21:29:09 -- common/autotest_common.sh@10 -- # set +x 00:13:53.686 21:29:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:53.686 21:29:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:53.686 21:29:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:53.686 21:29:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:53.686 21:29:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:53.686 21:29:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:53.686 21:29:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:53.686 21:29:16 -- nvmf/common.sh@295 -- # net_devs=() 00:13:53.686 21:29:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:53.686 21:29:16 -- nvmf/common.sh@296 -- # e810=() 00:13:53.686 21:29:16 -- nvmf/common.sh@296 -- # local -ga e810 00:13:53.686 21:29:16 -- nvmf/common.sh@297 -- # x722=() 00:13:53.686 21:29:16 -- nvmf/common.sh@297 -- # local -ga x722 00:13:53.686 21:29:16 -- nvmf/common.sh@298 -- # mlx=() 00:13:53.686 21:29:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:53.686 21:29:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.686 21:29:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.686 21:29:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.686 21:29:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.686 21:29:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.686 21:29:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.686 21:29:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.686 21:29:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.686 21:29:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.687 21:29:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.687 21:29:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.687 21:29:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:53.687 21:29:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:53.687 21:29:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:53.687 21:29:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.687 21:29:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:53.687 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:53.687 21:29:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.687 
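The array builds above are common.sh's NIC allow-list, keyed by PCI vendor:device pairs (intel=0x8086, mellanox=0x15b3); with SPDK_TEST_NVMF_NICS=e810 the e810 bucket wins and pci_devs collapses to the two E810 functions found next. The same table can be queried directly with lspci, shown here only as a hypothetical standalone check, not something common.sh itself runs:

    lspci -D -d 8086:1592   # Intel E810 variant
    lspci -D -d 8086:159b   # Intel E810 variant, 0000:af:00.0/.1 in this run
    lspci -D -d 8086:37d2   # Intel X722
    lspci -D -d 15b3:       # any Mellanox function (the 0x1013..0xa2dc entries above)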
21:29:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.687 21:29:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:53.687 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:53.687 21:29:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:53.687 21:29:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.687 21:29:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.687 21:29:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:53.687 21:29:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.687 21:29:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:53.687 Found net devices under 0000:af:00.0: cvl_0_0 00:13:53.687 21:29:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.687 21:29:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.687 21:29:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.687 21:29:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:53.687 21:29:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.687 21:29:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:53.687 Found net devices under 0000:af:00.1: cvl_0_1 00:13:53.687 21:29:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.687 21:29:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:53.687 21:29:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:53.687 21:29:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:53.687 21:29:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:53.687 21:29:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.687 21:29:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.687 21:29:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.687 21:29:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:53.687 21:29:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.687 21:29:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.687 21:29:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:53.687 21:29:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.687 21:29:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.687 21:29:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:53.687 21:29:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:53.687 21:29:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.687 21:29:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.946 21:29:16 -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:13:53.946 21:29:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.946 21:29:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:53.946 21:29:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.946 21:29:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.946 21:29:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.946 21:29:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:53.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:13:53.946 00:13:53.946 --- 10.0.0.2 ping statistics --- 00:13:53.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.946 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:13:53.946 21:29:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:53.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:13:53.946 00:13:53.946 --- 10.0.0.1 ping statistics --- 00:13:53.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.946 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:13:53.946 21:29:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.946 21:29:16 -- nvmf/common.sh@411 -- # return 0 00:13:53.946 21:29:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:53.946 21:29:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.946 21:29:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:53.946 21:29:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:53.946 21:29:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.946 21:29:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:53.946 21:29:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:53.946 21:29:16 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:53.946 21:29:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:53.946 21:29:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:53.946 21:29:16 -- common/autotest_common.sh@10 -- # set +x 00:13:53.946 21:29:16 -- nvmf/common.sh@470 -- # nvmfpid=2810690 00:13:53.946 21:29:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:53.946 21:29:16 -- nvmf/common.sh@471 -- # waitforlisten 2810690 00:13:53.946 21:29:16 -- common/autotest_common.sh@817 -- # '[' -z 2810690 ']' 00:13:53.946 21:29:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.946 21:29:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:53.946 21:29:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.946 21:29:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:53.946 21:29:16 -- common/autotest_common.sh@10 -- # set +x 00:13:54.205 [2024-04-24 21:29:16.860832] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
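At this point nvmf_tcp_init has wired the two E810 ports into a loopback topology: the target port lives in a fresh network namespace and the initiator port stays in the root namespace, so NVMe/TCP traffic actually crosses the physical links. Recapped as one block, with every command taken from the trace above and only the comments added:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP back into the root namespace
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the reverse path

The target itself was then launched inside the namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -m 0x7 line above), while rpc.py keeps talking to it over the UNIX socket from the root namespace, since network namespaces do not partition the filesystem.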
00:13:54.205 [2024-04-24 21:29:16.860880] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.205 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.205 [2024-04-24 21:29:16.932563] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:54.205 [2024-04-24 21:29:17.004658] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.205 [2024-04-24 21:29:17.004697] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.205 [2024-04-24 21:29:17.004706] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.205 [2024-04-24 21:29:17.004714] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.205 [2024-04-24 21:29:17.004721] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.205 [2024-04-24 21:29:17.004767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.205 [2024-04-24 21:29:17.004863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.205 [2024-04-24 21:29:17.004863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.140 21:29:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:55.140 21:29:17 -- common/autotest_common.sh@850 -- # return 0 00:13:55.140 21:29:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:55.140 21:29:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:55.140 21:29:17 -- common/autotest_common.sh@10 -- # set +x 00:13:55.140 21:29:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.140 21:29:17 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:55.140 [2024-04-24 21:29:17.873075] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.140 21:29:17 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:55.399 21:29:18 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:55.399 21:29:18 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:55.399 21:29:18 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:55.399 21:29:18 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:55.657 21:29:18 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:55.916 21:29:18 -- target/nvmf_lvol.sh@29 -- # lvs=4a89a212-db75-41d2-98f5-32b62d7376f3 00:13:55.916 21:29:18 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4a89a212-db75-41d2-98f5-32b62d7376f3 lvol 20 00:13:56.174 21:29:18 -- target/nvmf_lvol.sh@32 -- # lvol=e044bb13-39ec-4cf9-ac10-6f6e3f545d09 00:13:56.174 21:29:18 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:56.174 21:29:18 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e044bb13-39ec-4cf9-ac10-6f6e3f545d09 00:13:56.432 21:29:19 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:56.690 [2024-04-24 21:29:19.334393] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.690 21:29:19 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:56.690 21:29:19 -- target/nvmf_lvol.sh@42 -- # perf_pid=2811234 00:13:56.690 21:29:19 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:56.690 21:29:19 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:56.949 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.884 21:29:20 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e044bb13-39ec-4cf9-ac10-6f6e3f545d09 MY_SNAPSHOT 00:13:57.884 21:29:20 -- target/nvmf_lvol.sh@47 -- # snapshot=8203ed4b-2e51-43f0-a0ea-20f742f4cc51 00:13:57.884 21:29:20 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e044bb13-39ec-4cf9-ac10-6f6e3f545d09 30 00:13:58.142 21:29:20 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8203ed4b-2e51-43f0-a0ea-20f742f4cc51 MY_CLONE 00:13:58.402 21:29:21 -- target/nvmf_lvol.sh@49 -- # clone=d4c78f16-39bd-4ca6-8ed5-befb3f91cb96 00:13:58.402 21:29:21 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d4c78f16-39bd-4ca6-8ed5-befb3f91cb96 00:13:58.662 21:29:21 -- target/nvmf_lvol.sh@53 -- # wait 2811234 00:14:08.636 Initializing NVMe Controllers 00:14:08.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:08.636 Controller IO queue size 128, less than required. 00:14:08.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:08.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:08.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:08.636 Initialization complete. Launching workers. 
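While the perf workers run, it is worth collecting the RPC sequence this test drove to build and then mutate the exported volume. A sketch with shell variables standing in for the UUIDs the trace captured (4a89a212-..., e044bb13-..., and friends); every call and flag below appears verbatim in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                        # -> Malloc0
    $rpc bdev_malloc_create 64 512                        # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # prints the new lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB volume on the raid0 base
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze the live data
    $rpc bdev_lvol_resize "$lvol" 30                      # grow the writable lvol, under I/O
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
    $rpc bdev_lvol_inflate "$clone"                       # decouple the clone from its snapshot

The snapshot, resize, clone and inflate steps all land while spdk_nvme_perf is mid-run, which is the scenario the test exercises: lvol metadata operations proceeding without disturbing in-flight I/O.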
00:14:08.636 ======================================================== 00:14:08.636 Latency(us) 00:14:08.636 Device Information : IOPS MiB/s Average min max 00:14:08.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12151.70 47.47 10538.18 2004.70 62643.75 00:14:08.636 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11434.30 44.67 11199.13 3680.21 42054.50 00:14:08.636 ======================================================== 00:14:08.636 Total : 23586.00 92.13 10858.60 2004.70 62643.75 00:14:08.636 00:14:08.636 21:29:30 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:08.637 21:29:30 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e044bb13-39ec-4cf9-ac10-6f6e3f545d09 00:14:08.637 21:29:30 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4a89a212-db75-41d2-98f5-32b62d7376f3 00:14:08.637 21:29:30 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:08.637 21:29:30 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:08.637 21:29:30 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:08.637 21:29:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:08.637 21:29:30 -- nvmf/common.sh@117 -- # sync 00:14:08.637 21:29:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:08.637 21:29:30 -- nvmf/common.sh@120 -- # set +e 00:14:08.637 21:29:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:08.637 21:29:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:08.637 rmmod nvme_tcp 00:14:08.637 rmmod nvme_fabrics 00:14:08.637 rmmod nvme_keyring 00:14:08.637 21:29:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:08.637 21:29:30 -- nvmf/common.sh@124 -- # set -e 00:14:08.637 21:29:30 -- nvmf/common.sh@125 -- # return 0 00:14:08.637 21:29:30 -- nvmf/common.sh@478 -- # '[' -n 2810690 ']' 00:14:08.637 21:29:30 -- nvmf/common.sh@479 -- # killprocess 2810690 00:14:08.637 21:29:30 -- common/autotest_common.sh@936 -- # '[' -z 2810690 ']' 00:14:08.637 21:29:30 -- common/autotest_common.sh@940 -- # kill -0 2810690 00:14:08.637 21:29:30 -- common/autotest_common.sh@941 -- # uname 00:14:08.637 21:29:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:08.637 21:29:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2810690 00:14:08.637 21:29:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:08.637 21:29:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:08.637 21:29:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2810690' 00:14:08.637 killing process with pid 2810690 00:14:08.637 21:29:30 -- common/autotest_common.sh@955 -- # kill 2810690 00:14:08.637 21:29:30 -- common/autotest_common.sh@960 -- # wait 2810690 00:14:08.637 21:29:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:08.637 21:29:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:08.637 21:29:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:08.637 21:29:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:08.637 21:29:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:08.637 21:29:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.637 21:29:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:08.637 21:29:30 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:10.541 21:29:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:10.541 00:14:10.541 real 0m23.386s 00:14:10.542 user 1m2.732s 00:14:10.542 sys 0m10.248s 00:14:10.542 21:29:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:10.542 21:29:33 -- common/autotest_common.sh@10 -- # set +x 00:14:10.542 ************************************ 00:14:10.542 END TEST nvmf_lvol 00:14:10.542 ************************************ 00:14:10.542 21:29:33 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:10.542 21:29:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:10.542 21:29:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:10.542 21:29:33 -- common/autotest_common.sh@10 -- # set +x 00:14:10.542 ************************************ 00:14:10.542 START TEST nvmf_lvs_grow 00:14:10.542 ************************************ 00:14:10.542 21:29:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:10.542 * Looking for test storage... 00:14:10.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.542 21:29:33 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.542 21:29:33 -- nvmf/common.sh@7 -- # uname -s 00:14:10.542 21:29:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.542 21:29:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.542 21:29:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.542 21:29:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.542 21:29:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.542 21:29:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.542 21:29:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.542 21:29:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.542 21:29:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.542 21:29:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.542 21:29:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:10.542 21:29:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:10.542 21:29:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.542 21:29:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.542 21:29:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.542 21:29:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.542 21:29:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.542 21:29:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.542 21:29:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.542 21:29:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.542 21:29:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.542 21:29:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.542 21:29:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.542 21:29:33 -- paths/export.sh@5 -- # export PATH 00:14:10.542 21:29:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.542 21:29:33 -- nvmf/common.sh@47 -- # : 0 00:14:10.542 21:29:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.542 21:29:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.542 21:29:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.542 21:29:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.542 21:29:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.542 21:29:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.542 21:29:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.542 21:29:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.542 21:29:33 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.542 21:29:33 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:10.542 21:29:33 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:10.542 21:29:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:10.542 21:29:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.542 21:29:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:10.542 21:29:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:10.542 21:29:33 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:14:10.542 21:29:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.542 21:29:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.542 21:29:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.542 21:29:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:10.542 21:29:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:10.542 21:29:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:10.542 21:29:33 -- common/autotest_common.sh@10 -- # set +x 00:14:17.122 21:29:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:17.122 21:29:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.122 21:29:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.122 21:29:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.122 21:29:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.122 21:29:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.122 21:29:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.122 21:29:39 -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.122 21:29:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.122 21:29:39 -- nvmf/common.sh@296 -- # e810=() 00:14:17.122 21:29:39 -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.122 21:29:39 -- nvmf/common.sh@297 -- # x722=() 00:14:17.122 21:29:39 -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.122 21:29:39 -- nvmf/common.sh@298 -- # mlx=() 00:14:17.122 21:29:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.122 21:29:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.122 21:29:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.122 21:29:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.122 21:29:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.122 21:29:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.122 21:29:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.122 21:29:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.122 21:29:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.122 21:29:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.122 21:29:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.122 21:29:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.122 21:29:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.122 21:29:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.122 21:29:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.122 21:29:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.122 21:29:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:17.122 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:17.122 21:29:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.122 
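nvmf_lvs_grow drives two SPDK applications side by side, so it pins them to separate RPC endpoints: the target keeps the default /var/tmp/spdk.sock, while bdevperf is started later with -r /var/tmp/bdevperf.sock (the bdevperf_rpc_sock variable set above). A sketch of addressing each one with rpc.py's -s flag; both calls appear later in this trace, and the lvstore UUID is a placeholder:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_lvol_get_lvstores -u <lvs-uuid>                          # target side, default socket
    $rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000   # initiator side, bdevperf's socket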
21:29:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.122 21:29:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:17.122 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:17.122 21:29:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.122 21:29:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.122 21:29:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.122 21:29:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:17.122 21:29:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.122 21:29:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:17.122 Found net devices under 0000:af:00.0: cvl_0_0 00:14:17.122 21:29:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.122 21:29:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.122 21:29:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.122 21:29:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:17.122 21:29:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.122 21:29:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:17.122 Found net devices under 0000:af:00.1: cvl_0_1 00:14:17.122 21:29:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.122 21:29:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:17.122 21:29:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:17.122 21:29:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:17.122 21:29:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:17.122 21:29:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.122 21:29:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.122 21:29:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.122 21:29:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.122 21:29:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.122 21:29:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.122 21:29:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:17.122 21:29:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.122 21:29:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.122 21:29:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:17.122 21:29:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.122 21:29:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.122 21:29:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.122 21:29:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.122 21:29:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.122 21:29:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.122 
21:29:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.381 21:29:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.381 21:29:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.381 21:29:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:17.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:14:17.381 00:14:17.381 --- 10.0.0.2 ping statistics --- 00:14:17.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.381 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:14:17.381 21:29:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:14:17.381 00:14:17.381 --- 10.0.0.1 ping statistics --- 00:14:17.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.382 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:14:17.382 21:29:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.382 21:29:40 -- nvmf/common.sh@411 -- # return 0 00:14:17.382 21:29:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:17.382 21:29:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.382 21:29:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:17.382 21:29:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:17.382 21:29:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.382 21:29:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:17.382 21:29:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:17.382 21:29:40 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:17.382 21:29:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:17.382 21:29:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:17.382 21:29:40 -- common/autotest_common.sh@10 -- # set +x 00:14:17.382 21:29:40 -- nvmf/common.sh@470 -- # nvmfpid=2816814 00:14:17.382 21:29:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:17.382 21:29:40 -- nvmf/common.sh@471 -- # waitforlisten 2816814 00:14:17.382 21:29:40 -- common/autotest_common.sh@817 -- # '[' -z 2816814 ']' 00:14:17.382 21:29:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.382 21:29:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:17.382 21:29:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.382 21:29:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:17.382 21:29:40 -- common/autotest_common.sh@10 -- # set +x 00:14:17.382 [2024-04-24 21:29:40.173214] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
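The waitforlisten call above blocks until the freshly launched nvmf_tgt (pid 2816814 here) answers on /var/tmp/spdk.sock. A minimal hand-rolled equivalent, offered purely as a sketch of what "listen on UNIX domain socket" readiness means, not the helper's actual implementation:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do                                # roughly a 10 s budget at 0.1 s per probe
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done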
00:14:17.382 [2024-04-24 21:29:40.173260] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.382 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.382 [2024-04-24 21:29:40.246017] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.641 [2024-04-24 21:29:40.319033] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.641 [2024-04-24 21:29:40.319069] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.641 [2024-04-24 21:29:40.319079] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.641 [2024-04-24 21:29:40.319088] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.641 [2024-04-24 21:29:40.319095] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.641 [2024-04-24 21:29:40.319122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.209 21:29:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:18.209 21:29:40 -- common/autotest_common.sh@850 -- # return 0 00:14:18.209 21:29:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:18.209 21:29:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:18.209 21:29:40 -- common/autotest_common.sh@10 -- # set +x 00:14:18.209 21:29:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.209 21:29:41 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:18.468 [2024-04-24 21:29:41.170039] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.468 21:29:41 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:18.468 21:29:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:18.468 21:29:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:18.468 21:29:41 -- common/autotest_common.sh@10 -- # set +x 00:14:18.468 ************************************ 00:14:18.468 START TEST lvs_grow_clean 00:14:18.468 ************************************ 00:14:18.468 21:29:41 -- common/autotest_common.sh@1111 -- # lvs_grow 00:14:18.468 21:29:41 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:18.468 21:29:41 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:18.468 21:29:41 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:18.468 21:29:41 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:18.468 21:29:41 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:18.468 21:29:41 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:18.468 21:29:41 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:18.468 21:29:41 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:18.468 21:29:41 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:18.727 21:29:41 -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:18.727 21:29:41 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:18.986 21:29:41 -- target/nvmf_lvs_grow.sh@28 -- # lvs=727c355b-b4dd-4938-a4dc-b73b3c364e8c 00:14:18.986 21:29:41 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 727c355b-b4dd-4938-a4dc-b73b3c364e8c 00:14:18.986 21:29:41 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:19.246 21:29:41 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:19.246 21:29:41 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:19.246 21:29:41 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 727c355b-b4dd-4938-a4dc-b73b3c364e8c lvol 150 00:14:19.246 21:29:42 -- target/nvmf_lvs_grow.sh@33 -- # lvol=a0966a60-281e-4539-9054-a022c915082b 00:14:19.246 21:29:42 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:19.246 21:29:42 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:19.505 [2024-04-24 21:29:42.233715] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:19.505 [2024-04-24 21:29:42.233764] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:19.505 true 00:14:19.505 21:29:42 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 727c355b-b4dd-4938-a4dc-b73b3c364e8c 00:14:19.505 21:29:42 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:19.764 21:29:42 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:19.764 21:29:42 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:19.764 21:29:42 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a0966a60-281e-4539-9054-a022c915082b 00:14:20.023 21:29:42 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:20.023 [2024-04-24 21:29:42.891703] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.023 21:29:42 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:20.282 21:29:43 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2817377 00:14:20.282 21:29:43 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:20.282 21:29:43 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:20.282 21:29:43 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2817377 
/var/tmp/bdevperf.sock 00:14:20.282 21:29:43 -- common/autotest_common.sh@817 -- # '[' -z 2817377 ']' 00:14:20.282 21:29:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:20.282 21:29:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:20.282 21:29:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:20.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:20.283 21:29:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:20.283 21:29:43 -- common/autotest_common.sh@10 -- # set +x 00:14:20.283 [2024-04-24 21:29:43.112019] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:14:20.283 [2024-04-24 21:29:43.112071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817377 ] 00:14:20.283 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.542 [2024-04-24 21:29:43.181532] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.542 [2024-04-24 21:29:43.253027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.110 21:29:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:21.110 21:29:43 -- common/autotest_common.sh@850 -- # return 0 00:14:21.110 21:29:43 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:21.679 Nvme0n1 00:14:21.679 21:29:44 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:21.679 [ 00:14:21.679 { 00:14:21.679 "name": "Nvme0n1", 00:14:21.679 "aliases": [ 00:14:21.679 "a0966a60-281e-4539-9054-a022c915082b" 00:14:21.679 ], 00:14:21.679 "product_name": "NVMe disk", 00:14:21.679 "block_size": 4096, 00:14:21.679 "num_blocks": 38912, 00:14:21.679 "uuid": "a0966a60-281e-4539-9054-a022c915082b", 00:14:21.679 "assigned_rate_limits": { 00:14:21.679 "rw_ios_per_sec": 0, 00:14:21.679 "rw_mbytes_per_sec": 0, 00:14:21.679 "r_mbytes_per_sec": 0, 00:14:21.679 "w_mbytes_per_sec": 0 00:14:21.679 }, 00:14:21.679 "claimed": false, 00:14:21.679 "zoned": false, 00:14:21.679 "supported_io_types": { 00:14:21.679 "read": true, 00:14:21.679 "write": true, 00:14:21.679 "unmap": true, 00:14:21.679 "write_zeroes": true, 00:14:21.679 "flush": true, 00:14:21.679 "reset": true, 00:14:21.679 "compare": true, 00:14:21.679 "compare_and_write": true, 00:14:21.679 "abort": true, 00:14:21.679 "nvme_admin": true, 00:14:21.679 "nvme_io": true 00:14:21.679 }, 00:14:21.679 "memory_domains": [ 00:14:21.679 { 00:14:21.679 "dma_device_id": "system", 00:14:21.679 "dma_device_type": 1 00:14:21.679 } 00:14:21.679 ], 00:14:21.679 "driver_specific": { 00:14:21.679 "nvme": [ 00:14:21.679 { 00:14:21.679 "trid": { 00:14:21.679 "trtype": "TCP", 00:14:21.679 "adrfam": "IPv4", 00:14:21.679 "traddr": "10.0.0.2", 00:14:21.679 "trsvcid": "4420", 00:14:21.679 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:21.679 }, 00:14:21.679 "ctrlr_data": { 00:14:21.679 "cntlid": 1, 00:14:21.679 "vendor_id": "0x8086", 00:14:21.679 "model_number": "SPDK bdev Controller", 00:14:21.679 "serial_number": "SPDK0", 
00:14:21.679 "firmware_revision": "24.05", 00:14:21.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:21.679 "oacs": { 00:14:21.679 "security": 0, 00:14:21.679 "format": 0, 00:14:21.679 "firmware": 0, 00:14:21.679 "ns_manage": 0 00:14:21.679 }, 00:14:21.679 "multi_ctrlr": true, 00:14:21.679 "ana_reporting": false 00:14:21.679 }, 00:14:21.679 "vs": { 00:14:21.679 "nvme_version": "1.3" 00:14:21.679 }, 00:14:21.679 "ns_data": { 00:14:21.679 "id": 1, 00:14:21.679 "can_share": true 00:14:21.679 } 00:14:21.679 } 00:14:21.679 ], 00:14:21.679 "mp_policy": "active_passive" 00:14:21.679 } 00:14:21.679 } 00:14:21.679 ] 00:14:21.679 21:29:44 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2817571 00:14:21.679 21:29:44 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:21.679 21:29:44 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:21.679 Running I/O for 10 seconds... 00:14:23.060 Latency(us) 00:14:23.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.060 Nvme0n1 : 1.00 22912.00 89.50 0.00 0.00 0.00 0.00 0.00 00:14:23.060 =================================================================================================================== 00:14:23.060 Total : 22912.00 89.50 0.00 0.00 0.00 0.00 0.00 00:14:23.060 00:14:23.627 21:29:46 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 727c355b-b4dd-4938-a4dc-b73b3c364e8c 00:14:23.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.886 Nvme0n1 : 2.00 23232.00 90.75 0.00 0.00 0.00 0.00 0.00 00:14:23.886 =================================================================================================================== 00:14:23.886 Total : 23232.00 90.75 0.00 0.00 0.00 0.00 0.00 00:14:23.886 00:14:23.886 true 00:14:23.886 21:29:46 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 727c355b-b4dd-4938-a4dc-b73b3c364e8c 00:14:23.886 21:29:46 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:24.145 21:29:46 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:24.145 21:29:46 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:24.145 21:29:46 -- target/nvmf_lvs_grow.sh@65 -- # wait 2817571 00:14:24.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.711 Nvme0n1 : 3.00 23338.67 91.17 0.00 0.00 0.00 0.00 0.00 00:14:24.711 =================================================================================================================== 00:14:24.711 Total : 23338.67 91.17 0.00 0.00 0.00 0.00 0.00 00:14:24.711 00:14:26.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.121 Nvme0n1 : 4.00 23343.50 91.19 0.00 0.00 0.00 0.00 0.00 00:14:26.121 =================================================================================================================== 00:14:26.121 Total : 23343.50 91.19 0.00 0.00 0.00 0.00 0.00 00:14:26.121 00:14:27.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.054 Nvme0n1 : 5.00 23436.40 91.55 0.00 0.00 0.00 0.00 0.00 00:14:27.054 =================================================================================================================== 00:14:27.054 Total : 
23436.40 91.55 0.00 0.00 0.00 0.00 0.00 00:14:27.054 00:14:27.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.988 Nvme0n1 : 6.00 23303.83 91.03 0.00 0.00 0.00 0.00 0.00 00:14:27.988 =================================================================================================================== 00:14:27.988 Total : 23303.83 91.03 0.00 0.00 0.00 0.00 0.00 00:14:27.988 00:14:28.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.922 Nvme0n1 : 7.00 23269.14 90.90 0.00 0.00 0.00 0.00 0.00 00:14:28.922 =================================================================================================================== 00:14:28.922 Total : 23269.14 90.90 0.00 0.00 0.00 0.00 0.00 00:14:28.922 00:14:29.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.857 Nvme0n1 : 8.00 23179.00 90.54 0.00 0.00 0.00 0.00 0.00 00:14:29.857 =================================================================================================================== 00:14:29.857 Total : 23179.00 90.54 0.00 0.00 0.00 0.00 0.00 00:14:29.857 00:14:30.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.792 Nvme0n1 : 9.00 23163.78 90.48 0.00 0.00 0.00 0.00 0.00 00:14:30.793 =================================================================================================================== 00:14:30.793 Total : 23163.78 90.48 0.00 0.00 0.00 0.00 0.00 00:14:30.793 00:14:31.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.729 Nvme0n1 : 10.00 23150.60 90.43 0.00 0.00 0.00 0.00 0.00 00:14:31.729 =================================================================================================================== 00:14:31.729 Total : 23150.60 90.43 0.00 0.00 0.00 0.00 0.00 00:14:31.729 00:14:31.729 00:14:31.729 Latency(us) 00:14:31.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.729 Nvme0n1 : 10.01 23149.99 90.43 0.00 0.00 5525.54 2936.01 26214.40 00:14:31.729 =================================================================================================================== 00:14:31.729 Total : 23149.99 90.43 0.00 0.00 5525.54 2936.01 26214.40 00:14:31.729 0 00:14:31.987 21:29:54 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2817377 00:14:31.987 21:29:54 -- common/autotest_common.sh@936 -- # '[' -z 2817377 ']' 00:14:31.988 21:29:54 -- common/autotest_common.sh@940 -- # kill -0 2817377 00:14:31.988 21:29:54 -- common/autotest_common.sh@941 -- # uname 00:14:31.988 21:29:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:31.988 21:29:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2817377 00:14:31.988 21:29:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:31.988 21:29:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:31.988 21:29:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2817377' 00:14:31.988 killing process with pid 2817377 00:14:31.988 21:29:54 -- common/autotest_common.sh@955 -- # kill 2817377 00:14:31.988 Received shutdown signal, test time was about 10.000000 seconds 00:14:31.988 00:14:31.988 Latency(us) 00:14:31.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.988 =================================================================================================================== 
00:14:31.988 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:31.988 21:29:54 -- common/autotest_common.sh@960 -- # wait 2817377 00:14:31.988 21:29:54 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:32.246 21:29:55 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 727c355b-b4dd-4938-a4dc-b73b3c364e8c 00:14:32.246 21:29:55 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:32.504 21:29:55 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:32.504 21:29:55 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:32.504 21:29:55 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:32.504 [2024-04-24 21:29:55.388015] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:32.763 21:29:55 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 727c355b-b4dd-4938-a4dc-b73b3c364e8c 00:14:32.763 21:29:55 -- common/autotest_common.sh@638 -- # local es=0 00:14:32.763 21:29:55 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 727c355b-b4dd-4938-a4dc-b73b3c364e8c 00:14:32.763 21:29:55 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.763 21:29:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.763 21:29:55 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.763 21:29:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.763 21:29:55 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.763 21:29:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.763 21:29:55 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.763 21:29:55 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:32.763 21:29:55 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 727c355b-b4dd-4938-a4dc-b73b3c364e8c 00:14:32.763 request: 00:14:32.763 { 00:14:32.763 "uuid": "727c355b-b4dd-4938-a4dc-b73b3c364e8c", 00:14:32.763 "method": "bdev_lvol_get_lvstores", 00:14:32.763 "req_id": 1 00:14:32.763 } 00:14:32.763 Got JSON-RPC error response 00:14:32.763 response: 00:14:32.763 { 00:14:32.763 "code": -19, 00:14:32.763 "message": "No such device" 00:14:32.763 } 00:14:32.763 21:29:55 -- common/autotest_common.sh@641 -- # es=1 00:14:32.763 21:29:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:32.763 21:29:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:32.763 21:29:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:32.763 21:29:55 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:33.022 aio_bdev 00:14:33.022 21:29:55 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 
a0966a60-281e-4539-9054-a022c915082b 00:14:33.022 21:29:55 -- common/autotest_common.sh@885 -- # local bdev_name=a0966a60-281e-4539-9054-a022c915082b 00:14:33.022 21:29:55 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:33.022 21:29:55 -- common/autotest_common.sh@887 -- # local i 00:14:33.022 21:29:55 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:33.022 21:29:55 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:33.022 21:29:55 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:33.281 21:29:55 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a0966a60-281e-4539-9054-a022c915082b -t 2000 00:14:33.281 [ 00:14:33.281 { 00:14:33.281 "name": "a0966a60-281e-4539-9054-a022c915082b", 00:14:33.281 "aliases": [ 00:14:33.281 "lvs/lvol" 00:14:33.281 ], 00:14:33.281 "product_name": "Logical Volume", 00:14:33.281 "block_size": 4096, 00:14:33.281 "num_blocks": 38912, 00:14:33.281 "uuid": "a0966a60-281e-4539-9054-a022c915082b", 00:14:33.281 "assigned_rate_limits": { 00:14:33.281 "rw_ios_per_sec": 0, 00:14:33.281 "rw_mbytes_per_sec": 0, 00:14:33.281 "r_mbytes_per_sec": 0, 00:14:33.281 "w_mbytes_per_sec": 0 00:14:33.281 }, 00:14:33.281 "claimed": false, 00:14:33.281 "zoned": false, 00:14:33.281 "supported_io_types": { 00:14:33.281 "read": true, 00:14:33.281 "write": true, 00:14:33.281 "unmap": true, 00:14:33.281 "write_zeroes": true, 00:14:33.281 "flush": false, 00:14:33.281 "reset": true, 00:14:33.281 "compare": false, 00:14:33.281 "compare_and_write": false, 00:14:33.281 "abort": false, 00:14:33.281 "nvme_admin": false, 00:14:33.281 "nvme_io": false 00:14:33.281 }, 00:14:33.281 "driver_specific": { 00:14:33.281 "lvol": { 00:14:33.281 "lvol_store_uuid": "727c355b-b4dd-4938-a4dc-b73b3c364e8c", 00:14:33.281 "base_bdev": "aio_bdev", 00:14:33.281 "thin_provision": false, 00:14:33.281 "snapshot": false, 00:14:33.281 "clone": false, 00:14:33.281 "esnap_clone": false 00:14:33.281 } 00:14:33.281 } 00:14:33.281 } 00:14:33.281 ] 00:14:33.281 21:29:56 -- common/autotest_common.sh@893 -- # return 0 00:14:33.281 21:29:56 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 727c355b-b4dd-4938-a4dc-b73b3c364e8c 00:14:33.281 21:29:56 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:33.539 21:29:56 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:33.539 21:29:56 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 727c355b-b4dd-4938-a4dc-b73b3c364e8c 00:14:33.539 21:29:56 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:33.798 21:29:56 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:33.798 21:29:56 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a0966a60-281e-4539-9054-a022c915082b 00:14:33.798 21:29:56 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 727c355b-b4dd-4938-a4dc-b73b3c364e8c 00:14:34.056 21:29:56 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:34.315 21:29:56 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 
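For readers reconstructing the flow from the xtrace above, the clean grow test boils down to this RPC sequence (a condensed sketch, not the script itself; $SPDK stands for the repo root, the UUID captures mirror what the trace prints, and the sizes and cluster counts are the ones logged above):

  # back the lvstore with a 200M file, grow the file, rescan, then grow the lvstore
  truncate -s 200M test/nvmf/target/aio_bdev
  $SPDK/scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  lvs=$($SPDK/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)        # 49 data clusters
  lvol=$($SPDK/scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M test/nvmf/target/aio_bdev
  $SPDK/scripts/rpc.py bdev_aio_rescan aio_bdev               # 51200 -> 102400 blocks
  $SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"       # 49 -> 99 data clusters
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'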
00:14:34.315 00:14:34.315 real 0m15.659s 00:14:34.315 user 0m14.719s 00:14:34.315 sys 0m2.042s 00:14:34.315 21:29:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:34.315 21:29:56 -- common/autotest_common.sh@10 -- # set +x 00:14:34.315 ************************************ 00:14:34.315 END TEST lvs_grow_clean 00:14:34.315 ************************************ 00:14:34.315 21:29:57 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:34.315 21:29:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:34.315 21:29:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:34.315 21:29:57 -- common/autotest_common.sh@10 -- # set +x 00:14:34.315 ************************************ 00:14:34.315 START TEST lvs_grow_dirty 00:14:34.315 ************************************ 00:14:34.315 21:29:57 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:14:34.315 21:29:57 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:34.315 21:29:57 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:34.315 21:29:57 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:34.315 21:29:57 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:34.315 21:29:57 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:34.315 21:29:57 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:34.315 21:29:57 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:34.315 21:29:57 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:34.573 21:29:57 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:34.573 21:29:57 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:34.573 21:29:57 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:34.833 21:29:57 -- target/nvmf_lvs_grow.sh@28 -- # lvs=1a684d62-33fd-4414-a5f2-7c7e3f2fee1a 00:14:34.833 21:29:57 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a684d62-33fd-4414-a5f2-7c7e3f2fee1a 00:14:34.833 21:29:57 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:35.092 21:29:57 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:35.092 21:29:57 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:35.092 21:29:57 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1a684d62-33fd-4414-a5f2-7c7e3f2fee1a lvol 150 00:14:35.092 21:29:57 -- target/nvmf_lvs_grow.sh@33 -- # lvol=eef8f488-1b23-46a4-bf10-7f39f8fe85c4 00:14:35.092 21:29:57 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:35.092 21:29:57 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:35.350 [2024-04-24 21:29:58.070889] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 
51200, new block count 102400 00:14:35.350 [2024-04-24 21:29:58.070936] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:35.350 true 00:14:35.350 21:29:58 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a684d62-33fd-4414-a5f2-7c7e3f2fee1a 00:14:35.350 21:29:58 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:35.609 21:29:58 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:35.609 21:29:58 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:35.609 21:29:58 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 eef8f488-1b23-46a4-bf10-7f39f8fe85c4 00:14:35.868 21:29:58 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:36.131 21:29:58 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:36.131 21:29:58 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2820132 00:14:36.131 21:29:58 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:36.131 21:29:58 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:36.131 21:29:58 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2820132 /var/tmp/bdevperf.sock 00:14:36.131 21:29:58 -- common/autotest_common.sh@817 -- # '[' -z 2820132 ']' 00:14:36.131 21:29:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:36.131 21:29:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:36.131 21:29:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:36.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:36.131 21:29:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:36.131 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:14:36.131 [2024-04-24 21:29:58.987717] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:14:36.131 [2024-04-24 21:29:58.987767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820132 ] 00:14:36.389 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.389 [2024-04-24 21:29:59.057561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.389 [2024-04-24 21:29:59.126009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.956 21:29:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:36.956 21:29:59 -- common/autotest_common.sh@850 -- # return 0 00:14:36.956 21:29:59 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:37.214 Nvme0n1 00:14:37.214 21:30:00 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:37.473 [ 00:14:37.473 { 00:14:37.473 "name": "Nvme0n1", 00:14:37.473 "aliases": [ 00:14:37.473 "eef8f488-1b23-46a4-bf10-7f39f8fe85c4" 00:14:37.473 ], 00:14:37.473 "product_name": "NVMe disk", 00:14:37.473 "block_size": 4096, 00:14:37.473 "num_blocks": 38912, 00:14:37.473 "uuid": "eef8f488-1b23-46a4-bf10-7f39f8fe85c4", 00:14:37.473 "assigned_rate_limits": { 00:14:37.473 "rw_ios_per_sec": 0, 00:14:37.473 "rw_mbytes_per_sec": 0, 00:14:37.473 "r_mbytes_per_sec": 0, 00:14:37.473 "w_mbytes_per_sec": 0 00:14:37.473 }, 00:14:37.473 "claimed": false, 00:14:37.473 "zoned": false, 00:14:37.473 "supported_io_types": { 00:14:37.473 "read": true, 00:14:37.473 "write": true, 00:14:37.473 "unmap": true, 00:14:37.473 "write_zeroes": true, 00:14:37.473 "flush": true, 00:14:37.473 "reset": true, 00:14:37.473 "compare": true, 00:14:37.473 "compare_and_write": true, 00:14:37.473 "abort": true, 00:14:37.473 "nvme_admin": true, 00:14:37.473 "nvme_io": true 00:14:37.473 }, 00:14:37.473 "memory_domains": [ 00:14:37.473 { 00:14:37.473 "dma_device_id": "system", 00:14:37.473 "dma_device_type": 1 00:14:37.473 } 00:14:37.473 ], 00:14:37.473 "driver_specific": { 00:14:37.473 "nvme": [ 00:14:37.473 { 00:14:37.473 "trid": { 00:14:37.473 "trtype": "TCP", 00:14:37.473 "adrfam": "IPv4", 00:14:37.473 "traddr": "10.0.0.2", 00:14:37.473 "trsvcid": "4420", 00:14:37.473 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:37.473 }, 00:14:37.473 "ctrlr_data": { 00:14:37.473 "cntlid": 1, 00:14:37.473 "vendor_id": "0x8086", 00:14:37.473 "model_number": "SPDK bdev Controller", 00:14:37.473 "serial_number": "SPDK0", 00:14:37.473 "firmware_revision": "24.05", 00:14:37.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:37.473 "oacs": { 00:14:37.473 "security": 0, 00:14:37.473 "format": 0, 00:14:37.473 "firmware": 0, 00:14:37.473 "ns_manage": 0 00:14:37.473 }, 00:14:37.473 "multi_ctrlr": true, 00:14:37.473 "ana_reporting": false 00:14:37.473 }, 00:14:37.473 "vs": { 00:14:37.473 "nvme_version": "1.3" 00:14:37.473 }, 00:14:37.473 "ns_data": { 00:14:37.473 "id": 1, 00:14:37.473 "can_share": true 00:14:37.473 } 00:14:37.473 } 00:14:37.473 ], 00:14:37.473 "mp_policy": "active_passive" 00:14:37.473 } 00:14:37.473 } 00:14:37.473 ] 00:14:37.473 21:30:00 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2820361 00:14:37.473 21:30:00 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:37.473 21:30:00 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:37.473 Running I/O for 10 seconds... 00:14:38.410 Latency(us) 00:14:38.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.410 Nvme0n1 : 1.00 22629.00 88.39 0.00 0.00 0.00 0.00 0.00 00:14:38.410 =================================================================================================================== 00:14:38.410 Total : 22629.00 88.39 0.00 0.00 0.00 0.00 0.00 00:14:38.410 00:14:39.345 21:30:02 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1a684d62-33fd-4414-a5f2-7c7e3f2fee1a 00:14:39.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.604 Nvme0n1 : 2.00 22993.50 89.82 0.00 0.00 0.00 0.00 0.00 00:14:39.604 =================================================================================================================== 00:14:39.604 Total : 22993.50 89.82 0.00 0.00 0.00 0.00 0.00 00:14:39.604 00:14:39.604 true 00:14:39.604 21:30:02 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a684d62-33fd-4414-a5f2-7c7e3f2fee1a 00:14:39.604 21:30:02 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:39.862 21:30:02 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:39.862 21:30:02 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:39.862 21:30:02 -- target/nvmf_lvs_grow.sh@65 -- # wait 2820361 00:14:40.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.429 Nvme0n1 : 3.00 22819.67 89.14 0.00 0.00 0.00 0.00 0.00 00:14:40.429 =================================================================================================================== 00:14:40.429 Total : 22819.67 89.14 0.00 0.00 0.00 0.00 0.00 00:14:40.429 00:14:41.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.805 Nvme0n1 : 4.00 22739.25 88.83 0.00 0.00 0.00 0.00 0.00 00:14:41.805 =================================================================================================================== 00:14:41.805 Total : 22739.25 88.83 0.00 0.00 0.00 0.00 0.00 00:14:41.805 00:14:42.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.740 Nvme0n1 : 5.00 22675.60 88.58 0.00 0.00 0.00 0.00 0.00 00:14:42.740 =================================================================================================================== 00:14:42.740 Total : 22675.60 88.58 0.00 0.00 0.00 0.00 0.00 00:14:42.740 00:14:43.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.677 Nvme0n1 : 6.00 22846.67 89.24 0.00 0.00 0.00 0.00 0.00 00:14:43.677 =================================================================================================================== 00:14:43.677 Total : 22846.67 89.24 0.00 0.00 0.00 0.00 0.00 00:14:43.677 00:14:44.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.611 Nvme0n1 : 7.00 22959.14 89.68 0.00 0.00 0.00 0.00 0.00 00:14:44.611 =================================================================================================================== 00:14:44.611 Total : 22959.14 89.68 0.00 0.00 0.00 
0.00 0.00 00:14:44.611 00:14:45.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.546 Nvme0n1 : 8.00 22970.88 89.73 0.00 0.00 0.00 0.00 0.00 00:14:45.546 =================================================================================================================== 00:14:45.546 Total : 22970.88 89.73 0.00 0.00 0.00 0.00 0.00 00:14:45.546 00:14:46.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.481 Nvme0n1 : 9.00 22984.67 89.78 0.00 0.00 0.00 0.00 0.00 00:14:46.481 =================================================================================================================== 00:14:46.481 Total : 22984.67 89.78 0.00 0.00 0.00 0.00 0.00 00:14:46.481 00:14:47.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.422 Nvme0n1 : 10.00 22970.20 89.73 0.00 0.00 0.00 0.00 0.00 00:14:47.422 =================================================================================================================== 00:14:47.422 Total : 22970.20 89.73 0.00 0.00 0.00 0.00 0.00 00:14:47.422 00:14:47.422 00:14:47.422 Latency(us) 00:14:47.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.422 Nvme0n1 : 10.01 22969.99 89.73 0.00 0.00 5568.64 3329.23 22439.53 00:14:47.422 =================================================================================================================== 00:14:47.422 Total : 22969.99 89.73 0.00 0.00 5568.64 3329.23 22439.53 00:14:47.422 0 00:14:47.680 21:30:10 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2820132 00:14:47.680 21:30:10 -- common/autotest_common.sh@936 -- # '[' -z 2820132 ']' 00:14:47.680 21:30:10 -- common/autotest_common.sh@940 -- # kill -0 2820132 00:14:47.680 21:30:10 -- common/autotest_common.sh@941 -- # uname 00:14:47.680 21:30:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:47.680 21:30:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2820132 00:14:47.680 21:30:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:47.680 21:30:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:47.680 21:30:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2820132' 00:14:47.680 killing process with pid 2820132 00:14:47.680 21:30:10 -- common/autotest_common.sh@955 -- # kill 2820132 00:14:47.680 Received shutdown signal, test time was about 10.000000 seconds 00:14:47.680 00:14:47.680 Latency(us) 00:14:47.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.680 =================================================================================================================== 00:14:47.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:47.680 21:30:10 -- common/autotest_common.sh@960 -- # wait 2820132 00:14:47.938 21:30:10 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:47.938 21:30:10 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a684d62-33fd-4414-a5f2-7c7e3f2fee1a 00:14:47.938 21:30:10 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:48.196 21:30:10 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:48.196 21:30:10 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:48.196 21:30:10 -- 
target/nvmf_lvs_grow.sh@73 -- # kill -9 2816814 00:14:48.196 21:30:10 -- target/nvmf_lvs_grow.sh@74 -- # wait 2816814 00:14:48.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2816814 Killed "${NVMF_APP[@]}" "$@" 00:14:48.197 21:30:11 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:48.197 21:30:11 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:48.197 21:30:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:48.197 21:30:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:48.197 21:30:11 -- common/autotest_common.sh@10 -- # set +x 00:14:48.197 21:30:11 -- nvmf/common.sh@470 -- # nvmfpid=2822750 00:14:48.197 21:30:11 -- nvmf/common.sh@471 -- # waitforlisten 2822750 00:14:48.197 21:30:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:48.197 21:30:11 -- common/autotest_common.sh@817 -- # '[' -z 2822750 ']' 00:14:48.197 21:30:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.197 21:30:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:48.197 21:30:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.197 21:30:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:48.197 21:30:11 -- common/autotest_common.sh@10 -- # set +x 00:14:48.197 [2024-04-24 21:30:11.058304] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:14:48.197 [2024-04-24 21:30:11.058362] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.456 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.456 [2024-04-24 21:30:11.133237] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.456 [2024-04-24 21:30:11.205465] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.456 [2024-04-24 21:30:11.205505] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.456 [2024-04-24 21:30:11.205514] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.456 [2024-04-24 21:30:11.205523] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.456 [2024-04-24 21:30:11.205530] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
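What the kill/restart above is for: the dirty variant leaves the lvstore uncleanly closed, so when the aio bdev is re-created a few lines below, blobstore load has to run recovery ("Performing recovery on blobstore") and the test then checks that the grown geometry survived. A minimal sketch of that sequence, with the pid and netns details of the trace elided:

  kill -9 "$nvmfpid"; wait "$nvmfpid" || true   # SIGKILL, no clean lvstore shutdown
  $SPDK/build/bin/nvmf_tgt -m 0x1 &             # fresh target process
  nvmfpid=$!
  $SPDK/scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  # recovery replays blob metadata; the earlier grow must still be visible:
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99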
00:14:48.456 [2024-04-24 21:30:11.205552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.024 21:30:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:49.024 21:30:11 -- common/autotest_common.sh@850 -- # return 0 00:14:49.024 21:30:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:49.024 21:30:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:49.024 21:30:11 -- common/autotest_common.sh@10 -- # set +x 00:14:49.024 21:30:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.024 21:30:11 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:49.282 [2024-04-24 21:30:12.053955] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:49.282 [2024-04-24 21:30:12.054044] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:49.282 [2024-04-24 21:30:12.054069] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:49.282 21:30:12 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:49.282 21:30:12 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev eef8f488-1b23-46a4-bf10-7f39f8fe85c4 00:14:49.282 21:30:12 -- common/autotest_common.sh@885 -- # local bdev_name=eef8f488-1b23-46a4-bf10-7f39f8fe85c4 00:14:49.282 21:30:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:49.282 21:30:12 -- common/autotest_common.sh@887 -- # local i 00:14:49.282 21:30:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:49.282 21:30:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:49.282 21:30:12 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:49.540 21:30:12 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b eef8f488-1b23-46a4-bf10-7f39f8fe85c4 -t 2000 00:14:49.540 [ 00:14:49.540 { 00:14:49.540 "name": "eef8f488-1b23-46a4-bf10-7f39f8fe85c4", 00:14:49.540 "aliases": [ 00:14:49.540 "lvs/lvol" 00:14:49.540 ], 00:14:49.540 "product_name": "Logical Volume", 00:14:49.540 "block_size": 4096, 00:14:49.540 "num_blocks": 38912, 00:14:49.540 "uuid": "eef8f488-1b23-46a4-bf10-7f39f8fe85c4", 00:14:49.540 "assigned_rate_limits": { 00:14:49.540 "rw_ios_per_sec": 0, 00:14:49.540 "rw_mbytes_per_sec": 0, 00:14:49.540 "r_mbytes_per_sec": 0, 00:14:49.540 "w_mbytes_per_sec": 0 00:14:49.540 }, 00:14:49.540 "claimed": false, 00:14:49.540 "zoned": false, 00:14:49.540 "supported_io_types": { 00:14:49.540 "read": true, 00:14:49.540 "write": true, 00:14:49.540 "unmap": true, 00:14:49.540 "write_zeroes": true, 00:14:49.540 "flush": false, 00:14:49.540 "reset": true, 00:14:49.540 "compare": false, 00:14:49.540 "compare_and_write": false, 00:14:49.540 "abort": false, 00:14:49.540 "nvme_admin": false, 00:14:49.540 "nvme_io": false 00:14:49.540 }, 00:14:49.540 "driver_specific": { 00:14:49.540 "lvol": { 00:14:49.540 "lvol_store_uuid": "1a684d62-33fd-4414-a5f2-7c7e3f2fee1a", 00:14:49.540 "base_bdev": "aio_bdev", 00:14:49.540 "thin_provision": false, 00:14:49.540 "snapshot": false, 00:14:49.540 "clone": false, 00:14:49.540 "esnap_clone": false 00:14:49.540 } 00:14:49.540 } 00:14:49.540 } 00:14:49.540 ] 00:14:49.540 21:30:12 -- common/autotest_common.sh@893 -- # return 0 00:14:49.540 21:30:12 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a684d62-33fd-4414-a5f2-7c7e3f2fee1a 00:14:49.540 21:30:12 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:49.798 21:30:12 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:49.798 21:30:12 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a684d62-33fd-4414-a5f2-7c7e3f2fee1a 00:14:49.798 21:30:12 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:50.056 21:30:12 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:50.056 21:30:12 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:50.056 [2024-04-24 21:30:12.890218] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:50.056 21:30:12 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a684d62-33fd-4414-a5f2-7c7e3f2fee1a 00:14:50.056 21:30:12 -- common/autotest_common.sh@638 -- # local es=0 00:14:50.056 21:30:12 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a684d62-33fd-4414-a5f2-7c7e3f2fee1a 00:14:50.056 21:30:12 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.056 21:30:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:50.056 21:30:12 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.056 21:30:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:50.056 21:30:12 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.056 21:30:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:50.056 21:30:12 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.056 21:30:12 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:50.056 21:30:12 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a684d62-33fd-4414-a5f2-7c7e3f2fee1a 00:14:50.314 request: 00:14:50.314 { 00:14:50.314 "uuid": "1a684d62-33fd-4414-a5f2-7c7e3f2fee1a", 00:14:50.314 "method": "bdev_lvol_get_lvstores", 00:14:50.314 "req_id": 1 00:14:50.314 } 00:14:50.314 Got JSON-RPC error response 00:14:50.314 response: 00:14:50.314 { 00:14:50.314 "code": -19, 00:14:50.314 "message": "No such device" 00:14:50.314 } 00:14:50.314 21:30:13 -- common/autotest_common.sh@641 -- # es=1 00:14:50.314 21:30:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:50.314 21:30:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:50.314 21:30:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:50.314 21:30:13 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:50.574 aio_bdev 00:14:50.574 21:30:13 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev eef8f488-1b23-46a4-bf10-7f39f8fe85c4 00:14:50.574 21:30:13 -- 
common/autotest_common.sh@885 -- # local bdev_name=eef8f488-1b23-46a4-bf10-7f39f8fe85c4 00:14:50.574 21:30:13 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:50.574 21:30:13 -- common/autotest_common.sh@887 -- # local i 00:14:50.574 21:30:13 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:50.574 21:30:13 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:50.574 21:30:13 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:50.574 21:30:13 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b eef8f488-1b23-46a4-bf10-7f39f8fe85c4 -t 2000 00:14:50.837 [ 00:14:50.837 { 00:14:50.837 "name": "eef8f488-1b23-46a4-bf10-7f39f8fe85c4", 00:14:50.837 "aliases": [ 00:14:50.837 "lvs/lvol" 00:14:50.837 ], 00:14:50.837 "product_name": "Logical Volume", 00:14:50.837 "block_size": 4096, 00:14:50.837 "num_blocks": 38912, 00:14:50.837 "uuid": "eef8f488-1b23-46a4-bf10-7f39f8fe85c4", 00:14:50.837 "assigned_rate_limits": { 00:14:50.837 "rw_ios_per_sec": 0, 00:14:50.837 "rw_mbytes_per_sec": 0, 00:14:50.837 "r_mbytes_per_sec": 0, 00:14:50.837 "w_mbytes_per_sec": 0 00:14:50.837 }, 00:14:50.837 "claimed": false, 00:14:50.837 "zoned": false, 00:14:50.837 "supported_io_types": { 00:14:50.837 "read": true, 00:14:50.837 "write": true, 00:14:50.837 "unmap": true, 00:14:50.837 "write_zeroes": true, 00:14:50.837 "flush": false, 00:14:50.837 "reset": true, 00:14:50.837 "compare": false, 00:14:50.837 "compare_and_write": false, 00:14:50.837 "abort": false, 00:14:50.837 "nvme_admin": false, 00:14:50.837 "nvme_io": false 00:14:50.837 }, 00:14:50.837 "driver_specific": { 00:14:50.837 "lvol": { 00:14:50.837 "lvol_store_uuid": "1a684d62-33fd-4414-a5f2-7c7e3f2fee1a", 00:14:50.837 "base_bdev": "aio_bdev", 00:14:50.837 "thin_provision": false, 00:14:50.837 "snapshot": false, 00:14:50.837 "clone": false, 00:14:50.837 "esnap_clone": false 00:14:50.837 } 00:14:50.837 } 00:14:50.837 } 00:14:50.837 ] 00:14:50.837 21:30:13 -- common/autotest_common.sh@893 -- # return 0 00:14:50.837 21:30:13 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:50.837 21:30:13 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a684d62-33fd-4414-a5f2-7c7e3f2fee1a 00:14:51.095 21:30:13 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:51.095 21:30:13 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:51.095 21:30:13 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a684d62-33fd-4414-a5f2-7c7e3f2fee1a 00:14:51.095 21:30:13 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:51.095 21:30:13 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eef8f488-1b23-46a4-bf10-7f39f8fe85c4 00:14:51.354 21:30:14 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1a684d62-33fd-4414-a5f2-7c7e3f2fee1a 00:14:51.613 21:30:14 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:51.613 21:30:14 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:51.613 00:14:51.613 real 0m17.274s 00:14:51.613 user 
0m43.465s 00:14:51.613 sys 0m4.898s 00:14:51.613 21:30:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:51.613 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:14:51.613 ************************************ 00:14:51.613 END TEST lvs_grow_dirty 00:14:51.613 ************************************ 00:14:51.872 21:30:14 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:51.872 21:30:14 -- common/autotest_common.sh@794 -- # type=--id 00:14:51.872 21:30:14 -- common/autotest_common.sh@795 -- # id=0 00:14:51.872 21:30:14 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:14:51.872 21:30:14 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:51.872 21:30:14 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:14:51.872 21:30:14 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:14:51.872 21:30:14 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:14:51.872 21:30:14 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:51.872 nvmf_trace.0 00:14:51.872 21:30:14 -- common/autotest_common.sh@809 -- # return 0 00:14:51.872 21:30:14 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:51.872 21:30:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:51.872 21:30:14 -- nvmf/common.sh@117 -- # sync 00:14:51.872 21:30:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:51.872 21:30:14 -- nvmf/common.sh@120 -- # set +e 00:14:51.872 21:30:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:51.872 21:30:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:51.872 rmmod nvme_tcp 00:14:51.872 rmmod nvme_fabrics 00:14:51.872 rmmod nvme_keyring 00:14:51.872 21:30:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:51.872 21:30:14 -- nvmf/common.sh@124 -- # set -e 00:14:51.872 21:30:14 -- nvmf/common.sh@125 -- # return 0 00:14:51.872 21:30:14 -- nvmf/common.sh@478 -- # '[' -n 2822750 ']' 00:14:51.872 21:30:14 -- nvmf/common.sh@479 -- # killprocess 2822750 00:14:51.872 21:30:14 -- common/autotest_common.sh@936 -- # '[' -z 2822750 ']' 00:14:51.872 21:30:14 -- common/autotest_common.sh@940 -- # kill -0 2822750 00:14:51.872 21:30:14 -- common/autotest_common.sh@941 -- # uname 00:14:51.872 21:30:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:51.872 21:30:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2822750 00:14:51.872 21:30:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:51.872 21:30:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:51.872 21:30:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2822750' 00:14:51.872 killing process with pid 2822750 00:14:51.872 21:30:14 -- common/autotest_common.sh@955 -- # kill 2822750 00:14:51.872 21:30:14 -- common/autotest_common.sh@960 -- # wait 2822750 00:14:52.131 21:30:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:52.131 21:30:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:52.131 21:30:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:52.131 21:30:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.131 21:30:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:52.131 21:30:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.131 21:30:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.131 21:30:14 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:54.663 21:30:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:54.663 00:14:54.663 real 0m43.717s 00:14:54.663 user 1m4.284s 00:14:54.663 sys 0m12.716s 00:14:54.663 21:30:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:54.663 21:30:16 -- common/autotest_common.sh@10 -- # set +x 00:14:54.663 ************************************ 00:14:54.663 END TEST nvmf_lvs_grow 00:14:54.663 ************************************ 00:14:54.663 21:30:17 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:54.663 21:30:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:54.663 21:30:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:54.663 21:30:17 -- common/autotest_common.sh@10 -- # set +x 00:14:54.663 ************************************ 00:14:54.663 START TEST nvmf_bdev_io_wait 00:14:54.663 ************************************ 00:14:54.663 21:30:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:54.663 * Looking for test storage... 00:14:54.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.663 21:30:17 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.663 21:30:17 -- nvmf/common.sh@7 -- # uname -s 00:14:54.663 21:30:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.663 21:30:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.663 21:30:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.663 21:30:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.663 21:30:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.663 21:30:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.663 21:30:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.663 21:30:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.663 21:30:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.663 21:30:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.663 21:30:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:54.663 21:30:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:54.663 21:30:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.663 21:30:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.663 21:30:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.663 21:30:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.663 21:30:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.663 21:30:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.663 21:30:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.663 21:30:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.663 21:30:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.663 21:30:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.663 21:30:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.663 21:30:17 -- paths/export.sh@5 -- # export PATH 00:14:54.663 21:30:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.664 21:30:17 -- nvmf/common.sh@47 -- # : 0 00:14:54.664 21:30:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:54.664 21:30:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:54.664 21:30:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.664 21:30:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.664 21:30:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.664 21:30:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:54.664 21:30:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:54.664 21:30:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:54.664 21:30:17 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:54.664 21:30:17 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:54.664 21:30:17 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:54.664 21:30:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:54.664 21:30:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.664 21:30:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:54.664 21:30:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:54.664 21:30:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:54.664 21:30:17 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.664 21:30:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.664 21:30:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.664 21:30:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:54.664 21:30:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:54.664 21:30:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:54.664 21:30:17 -- common/autotest_common.sh@10 -- # set +x 00:15:01.229 21:30:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:01.229 21:30:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:01.229 21:30:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:01.229 21:30:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:01.229 21:30:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:01.229 21:30:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:01.229 21:30:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:01.229 21:30:23 -- nvmf/common.sh@295 -- # net_devs=() 00:15:01.229 21:30:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:01.229 21:30:23 -- nvmf/common.sh@296 -- # e810=() 00:15:01.229 21:30:23 -- nvmf/common.sh@296 -- # local -ga e810 00:15:01.229 21:30:23 -- nvmf/common.sh@297 -- # x722=() 00:15:01.229 21:30:23 -- nvmf/common.sh@297 -- # local -ga x722 00:15:01.229 21:30:23 -- nvmf/common.sh@298 -- # mlx=() 00:15:01.229 21:30:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:01.229 21:30:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.229 21:30:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.229 21:30:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.229 21:30:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.229 21:30:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.229 21:30:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.229 21:30:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.229 21:30:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.229 21:30:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.229 21:30:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.229 21:30:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.229 21:30:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:01.229 21:30:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:01.229 21:30:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:01.229 21:30:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.229 21:30:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:01.229 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:01.229 21:30:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:15:01.229 21:30:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:01.229 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:01.229 21:30:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:01.229 21:30:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.229 21:30:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.229 21:30:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:01.229 21:30:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.229 21:30:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:01.229 Found net devices under 0000:af:00.0: cvl_0_0 00:15:01.229 21:30:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.229 21:30:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.229 21:30:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.229 21:30:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:01.229 21:30:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.229 21:30:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:01.229 Found net devices under 0000:af:00.1: cvl_0_1 00:15:01.229 21:30:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.229 21:30:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:01.229 21:30:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:01.229 21:30:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:01.229 21:30:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:01.229 21:30:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.229 21:30:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.229 21:30:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.229 21:30:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:01.229 21:30:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.229 21:30:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.229 21:30:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:01.229 21:30:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.229 21:30:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.229 21:30:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:01.229 21:30:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:01.229 21:30:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.229 21:30:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.489 21:30:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.489 21:30:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.489 21:30:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:01.489 21:30:24 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:01.489 21:30:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.489 21:30:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.489 21:30:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:01.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:15:01.489 00:15:01.489 --- 10.0.0.2 ping statistics --- 00:15:01.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.489 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:15:01.489 21:30:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:15:01.489 00:15:01.489 --- 10.0.0.1 ping statistics --- 00:15:01.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.489 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:15:01.489 21:30:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.489 21:30:24 -- nvmf/common.sh@411 -- # return 0 00:15:01.489 21:30:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:01.489 21:30:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.489 21:30:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:01.489 21:30:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:01.489 21:30:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.489 21:30:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:01.489 21:30:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:01.489 21:30:24 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:01.489 21:30:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:01.489 21:30:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:01.489 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:15:01.489 21:30:24 -- nvmf/common.sh@470 -- # nvmfpid=2827137 00:15:01.489 21:30:24 -- nvmf/common.sh@471 -- # waitforlisten 2827137 00:15:01.489 21:30:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:01.489 21:30:24 -- common/autotest_common.sh@817 -- # '[' -z 2827137 ']' 00:15:01.489 21:30:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.489 21:30:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:01.489 21:30:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.489 21:30:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:01.489 21:30:24 -- common/autotest_common.sh@10 -- # set +x 00:15:01.748 [2024-04-24 21:30:24.404865] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
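(For reference: the nvmf_tcp_init trace above reduces to the short sequence below. It builds a single-host TCP test topology by moving one port of the NIC pair into a private network namespace, so the initiator side (cvl_0_1, 10.0.0.1) and the target side (cvl_0_0, 10.0.0.2) talk over a real TCP path. Interface names and addresses are the ones discovered on this machine; run as root.)

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                 # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator reachability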
00:15:01.748 [2024-04-24 21:30:24.404909] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.748 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.748 [2024-04-24 21:30:24.479856] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:01.748 [2024-04-24 21:30:24.555698] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.748 [2024-04-24 21:30:24.555740] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.748 [2024-04-24 21:30:24.555750] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.748 [2024-04-24 21:30:24.555759] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.748 [2024-04-24 21:30:24.555766] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:01.749 [2024-04-24 21:30:24.555816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.749 [2024-04-24 21:30:24.555912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.749 [2024-04-24 21:30:24.555997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:01.749 [2024-04-24 21:30:24.555998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.316 21:30:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:02.316 21:30:25 -- common/autotest_common.sh@850 -- # return 0 00:15:02.576 21:30:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:02.576 21:30:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:02.576 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:15:02.576 21:30:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.576 21:30:25 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:02.576 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:02.576 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:15:02.576 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:02.576 21:30:25 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:02.576 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:02.576 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:15:02.576 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:02.576 21:30:25 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:02.576 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:02.576 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:15:02.576 [2024-04-24 21:30:25.322276] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.576 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:02.576 21:30:25 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:02.576 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:02.576 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:15:02.576 Malloc0 00:15:02.576 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:02.576 21:30:25 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:02.576 21:30:25 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:15:02.576 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:15:02.576 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:02.576 21:30:25 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:02.576 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:02.576 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:15:02.576 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:02.576 21:30:25 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.576 21:30:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:02.576 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:15:02.576 [2024-04-24 21:30:25.383983] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.576 21:30:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:02.577 21:30:25 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2827417 00:15:02.577 21:30:25 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:02.577 21:30:25 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:02.577 21:30:25 -- target/bdev_io_wait.sh@30 -- # READ_PID=2827419 00:15:02.577 21:30:25 -- nvmf/common.sh@521 -- # config=() 00:15:02.577 21:30:25 -- nvmf/common.sh@521 -- # local subsystem config 00:15:02.577 21:30:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:02.577 21:30:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:02.577 { 00:15:02.577 "params": { 00:15:02.577 "name": "Nvme$subsystem", 00:15:02.577 "trtype": "$TEST_TRANSPORT", 00:15:02.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:02.577 "adrfam": "ipv4", 00:15:02.577 "trsvcid": "$NVMF_PORT", 00:15:02.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:02.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:02.577 "hdgst": ${hdgst:-false}, 00:15:02.577 "ddgst": ${ddgst:-false} 00:15:02.577 }, 00:15:02.577 "method": "bdev_nvme_attach_controller" 00:15:02.577 } 00:15:02.577 EOF 00:15:02.577 )") 00:15:02.577 21:30:25 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:02.577 21:30:25 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:02.577 21:30:25 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2827421 00:15:02.577 21:30:25 -- nvmf/common.sh@521 -- # config=() 00:15:02.577 21:30:25 -- nvmf/common.sh@521 -- # local subsystem config 00:15:02.577 21:30:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:02.577 21:30:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:02.577 { 00:15:02.577 "params": { 00:15:02.577 "name": "Nvme$subsystem", 00:15:02.577 "trtype": "$TEST_TRANSPORT", 00:15:02.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:02.577 "adrfam": "ipv4", 00:15:02.577 "trsvcid": "$NVMF_PORT", 00:15:02.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:02.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:02.577 "hdgst": ${hdgst:-false}, 00:15:02.577 "ddgst": ${ddgst:-false} 00:15:02.577 }, 00:15:02.577 "method": "bdev_nvme_attach_controller" 00:15:02.577 } 00:15:02.577 EOF 00:15:02.577 )") 00:15:02.577 21:30:25 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:02.577 21:30:25 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:02.577 21:30:25 -- nvmf/common.sh@543 -- # cat 00:15:02.577 21:30:25 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2827424 00:15:02.577 21:30:25 -- target/bdev_io_wait.sh@35 -- # sync 00:15:02.577 21:30:25 -- nvmf/common.sh@521 -- # config=() 00:15:02.577 21:30:25 -- nvmf/common.sh@521 -- # local subsystem config 00:15:02.577 21:30:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:02.577 21:30:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:02.577 { 00:15:02.577 "params": { 00:15:02.577 "name": "Nvme$subsystem", 00:15:02.577 "trtype": "$TEST_TRANSPORT", 00:15:02.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:02.577 "adrfam": "ipv4", 00:15:02.577 "trsvcid": "$NVMF_PORT", 00:15:02.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:02.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:02.577 "hdgst": ${hdgst:-false}, 00:15:02.577 "ddgst": ${ddgst:-false} 00:15:02.577 }, 00:15:02.577 "method": "bdev_nvme_attach_controller" 00:15:02.577 } 00:15:02.577 EOF 00:15:02.577 )") 00:15:02.577 21:30:25 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:02.577 21:30:25 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:02.577 21:30:25 -- nvmf/common.sh@543 -- # cat 00:15:02.577 21:30:25 -- nvmf/common.sh@521 -- # config=() 00:15:02.577 21:30:25 -- nvmf/common.sh@521 -- # local subsystem config 00:15:02.577 21:30:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:02.577 21:30:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:02.577 { 00:15:02.577 "params": { 00:15:02.577 "name": "Nvme$subsystem", 00:15:02.577 "trtype": "$TEST_TRANSPORT", 00:15:02.577 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:02.577 "adrfam": "ipv4", 00:15:02.577 "trsvcid": "$NVMF_PORT", 00:15:02.577 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:02.577 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:02.577 "hdgst": ${hdgst:-false}, 00:15:02.577 "ddgst": ${ddgst:-false} 00:15:02.577 }, 00:15:02.577 "method": "bdev_nvme_attach_controller" 00:15:02.577 } 00:15:02.577 EOF 00:15:02.577 )") 00:15:02.577 21:30:25 -- nvmf/common.sh@543 -- # cat 00:15:02.577 21:30:25 -- target/bdev_io_wait.sh@37 -- # wait 2827417 00:15:02.577 21:30:25 -- nvmf/common.sh@543 -- # cat 00:15:02.577 21:30:25 -- nvmf/common.sh@545 -- # jq . 00:15:02.577 21:30:25 -- nvmf/common.sh@545 -- # jq . 00:15:02.577 21:30:25 -- nvmf/common.sh@546 -- # IFS=, 00:15:02.577 21:30:25 -- nvmf/common.sh@545 -- # jq . 
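(The '--json /dev/fd/63' on each bdevperf command line above is bash process substitution: gen_nvmf_target_json expands one heredoc per subsystem into the params block echoed by jq further down, and the result reaches bdevperf as an anonymous file descriptor. A minimal sketch of the pattern follows, with the single Nvme1 controller hard-coded; note the outer "subsystems"/"bdev" wrapper is an assumption about what the real helper emits, and the actual gen_nvmf_target_json also iterates over subsystems and injects extra bdev options.)

gen_json() {
  cat <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false } } ] } ] }
EOF
}
# <(gen_json) appears to the child process as /dev/fd/63,
# matching the "--json /dev/fd/63" seen in the command lines above
bdevperf --json <(gen_json) -q 128 -o 4096 -w write -t 1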
00:15:02.577 21:30:25 -- nvmf/common.sh@546 -- # IFS=, 00:15:02.577 21:30:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:02.577 "params": { 00:15:02.577 "name": "Nvme1", 00:15:02.577 "trtype": "tcp", 00:15:02.577 "traddr": "10.0.0.2", 00:15:02.577 "adrfam": "ipv4", 00:15:02.577 "trsvcid": "4420", 00:15:02.577 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.577 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.577 "hdgst": false, 00:15:02.577 "ddgst": false 00:15:02.577 }, 00:15:02.577 "method": "bdev_nvme_attach_controller" 00:15:02.577 }' 00:15:02.577 21:30:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:02.577 "params": { 00:15:02.577 "name": "Nvme1", 00:15:02.577 "trtype": "tcp", 00:15:02.577 "traddr": "10.0.0.2", 00:15:02.577 "adrfam": "ipv4", 00:15:02.577 "trsvcid": "4420", 00:15:02.577 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.577 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.577 "hdgst": false, 00:15:02.577 "ddgst": false 00:15:02.577 }, 00:15:02.577 "method": "bdev_nvme_attach_controller" 00:15:02.577 }' 00:15:02.577 21:30:25 -- nvmf/common.sh@545 -- # jq . 00:15:02.577 21:30:25 -- nvmf/common.sh@546 -- # IFS=, 00:15:02.577 21:30:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:02.577 "params": { 00:15:02.577 "name": "Nvme1", 00:15:02.577 "trtype": "tcp", 00:15:02.577 "traddr": "10.0.0.2", 00:15:02.577 "adrfam": "ipv4", 00:15:02.577 "trsvcid": "4420", 00:15:02.577 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.577 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.577 "hdgst": false, 00:15:02.577 "ddgst": false 00:15:02.577 }, 00:15:02.577 "method": "bdev_nvme_attach_controller" 00:15:02.577 }' 00:15:02.577 21:30:25 -- nvmf/common.sh@546 -- # IFS=, 00:15:02.577 21:30:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:02.577 "params": { 00:15:02.577 "name": "Nvme1", 00:15:02.577 "trtype": "tcp", 00:15:02.577 "traddr": "10.0.0.2", 00:15:02.577 "adrfam": "ipv4", 00:15:02.577 "trsvcid": "4420", 00:15:02.577 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.577 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.577 "hdgst": false, 00:15:02.577 "ddgst": false 00:15:02.577 }, 00:15:02.577 "method": "bdev_nvme_attach_controller" 00:15:02.577 }' 00:15:02.577 [2024-04-24 21:30:25.436924] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:15:02.577 [2024-04-24 21:30:25.436977] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:02.577 [2024-04-24 21:30:25.437888] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:15:02.577 [2024-04-24 21:30:25.437934] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:02.577 [2024-04-24 21:30:25.438040] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:15:02.577 [2024-04-24 21:30:25.438080] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:02.577 [2024-04-24 21:30:25.438485] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:15:02.577 [2024-04-24 21:30:25.438526] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:02.836 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.836 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.836 [2024-04-24 21:30:25.612741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.836 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.836 [2024-04-24 21:30:25.686678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:02.836 [2024-04-24 21:30:25.701522] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.095 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.095 [2024-04-24 21:30:25.774976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:03.095 [2024-04-24 21:30:25.792436] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.095 [2024-04-24 21:30:25.867652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:03.095 [2024-04-24 21:30:25.885208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.095 [2024-04-24 21:30:25.971214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:03.355 Running I/O for 1 seconds... 00:15:03.355 Running I/O for 1 seconds... 00:15:03.355 Running I/O for 1 seconds... 00:15:03.355 Running I/O for 1 seconds... 00:15:04.293 00:15:04.293 Latency(us) 00:15:04.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.293 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:04.293 Nvme1n1 : 1.00 265105.97 1035.57 0.00 0.00 481.60 194.97 642.25 00:15:04.293 =================================================================================================================== 00:15:04.293 Total : 265105.97 1035.57 0.00 0.00 481.60 194.97 642.25 00:15:04.293 00:15:04.293 Latency(us) 00:15:04.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.293 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:04.293 Nvme1n1 : 1.01 7763.39 30.33 0.00 0.00 16359.19 4666.16 28101.84 00:15:04.293 =================================================================================================================== 00:15:04.293 Total : 7763.39 30.33 0.00 0.00 16359.19 4666.16 28101.84 00:15:04.293 00:15:04.293 Latency(us) 00:15:04.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.293 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:04.293 Nvme1n1 : 1.01 7379.07 28.82 0.00 0.00 17290.00 6317.67 26528.97 00:15:04.293 =================================================================================================================== 00:15:04.293 Total : 7379.07 28.82 0.00 0.00 17290.00 6317.67 26528.97 00:15:04.293 00:15:04.293 Latency(us) 00:15:04.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.293 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:04.293 Nvme1n1 : 1.01 12299.88 48.05 0.00 0.00 10376.12 6239.03 20971.52 00:15:04.293 =================================================================================================================== 00:15:04.293 Total : 12299.88 48.05 0.00 0.00 10376.12 6239.03 20971.52 00:15:04.553 21:30:27 -- target/bdev_io_wait.sh@38 -- # wait 2827419 00:15:04.553 
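(Sanity check on the four runs above: with a closed queue of depth Q, Little's law predicts an average latency of roughly Q / IOPS. For the write job, 128 / 7763.39 IOPS is about 16.5 ms against the reported 16359 us average; read (128 / 7379.07, about 17.3 ms vs 17290 us), unmap (about 10.4 ms vs 10376 us) and flush (about 483 us vs 481.6 us) agree the same way, the small gap being ramp-up inside the 1 s window. One-liner to reproduce the write-job figure:)

echo 'scale=0; 128 * 1000000 / 7763.39' | bc    # ~16487 us expected vs 16359 us reported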
21:30:27 -- target/bdev_io_wait.sh@39 -- # wait 2827421 00:15:04.553 21:30:27 -- target/bdev_io_wait.sh@40 -- # wait 2827424 00:15:04.553 21:30:27 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.553 21:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:04.553 21:30:27 -- common/autotest_common.sh@10 -- # set +x 00:15:04.553 21:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:04.553 21:30:27 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:04.553 21:30:27 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:04.553 21:30:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:04.553 21:30:27 -- nvmf/common.sh@117 -- # sync 00:15:04.553 21:30:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:04.553 21:30:27 -- nvmf/common.sh@120 -- # set +e 00:15:04.553 21:30:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:04.553 21:30:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:04.553 rmmod nvme_tcp 00:15:04.553 rmmod nvme_fabrics 00:15:04.553 rmmod nvme_keyring 00:15:04.553 21:30:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:04.553 21:30:27 -- nvmf/common.sh@124 -- # set -e 00:15:04.553 21:30:27 -- nvmf/common.sh@125 -- # return 0 00:15:04.553 21:30:27 -- nvmf/common.sh@478 -- # '[' -n 2827137 ']' 00:15:04.553 21:30:27 -- nvmf/common.sh@479 -- # killprocess 2827137 00:15:04.553 21:30:27 -- common/autotest_common.sh@936 -- # '[' -z 2827137 ']' 00:15:04.553 21:30:27 -- common/autotest_common.sh@940 -- # kill -0 2827137 00:15:04.553 21:30:27 -- common/autotest_common.sh@941 -- # uname 00:15:04.553 21:30:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:04.553 21:30:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2827137 00:15:04.812 21:30:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:04.812 21:30:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:04.812 21:30:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2827137' 00:15:04.812 killing process with pid 2827137 00:15:04.812 21:30:27 -- common/autotest_common.sh@955 -- # kill 2827137 00:15:04.812 21:30:27 -- common/autotest_common.sh@960 -- # wait 2827137 00:15:04.812 21:30:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:04.812 21:30:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:04.812 21:30:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:04.812 21:30:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:04.812 21:30:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:04.812 21:30:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.812 21:30:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.812 21:30:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.349 21:30:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:07.349 00:15:07.349 real 0m12.590s 00:15:07.349 user 0m19.454s 00:15:07.349 sys 0m7.259s 00:15:07.349 21:30:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:07.349 21:30:29 -- common/autotest_common.sh@10 -- # set +x 00:15:07.349 ************************************ 00:15:07.349 END TEST nvmf_bdev_io_wait 00:15:07.349 ************************************ 00:15:07.349 21:30:29 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:07.349 21:30:29 -- common/autotest_common.sh@1087 
-- # '[' 3 -le 1 ']' 00:15:07.349 21:30:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:07.349 21:30:29 -- common/autotest_common.sh@10 -- # set +x 00:15:07.349 ************************************ 00:15:07.349 START TEST nvmf_queue_depth 00:15:07.349 ************************************ 00:15:07.349 21:30:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:07.349 * Looking for test storage... 00:15:07.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.349 21:30:30 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.349 21:30:30 -- nvmf/common.sh@7 -- # uname -s 00:15:07.349 21:30:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.349 21:30:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.349 21:30:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.349 21:30:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.349 21:30:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.349 21:30:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.349 21:30:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.349 21:30:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.349 21:30:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.349 21:30:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.349 21:30:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:07.349 21:30:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:07.349 21:30:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.349 21:30:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.349 21:30:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.349 21:30:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.349 21:30:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.349 21:30:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.349 21:30:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.349 21:30:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.349 21:30:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.349 21:30:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.349 21:30:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.349 21:30:30 -- paths/export.sh@5 -- # export PATH 00:15:07.349 21:30:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.349 21:30:30 -- nvmf/common.sh@47 -- # : 0 00:15:07.349 21:30:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.349 21:30:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.349 21:30:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.349 21:30:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.349 21:30:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.349 21:30:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.349 21:30:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.349 21:30:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.349 21:30:30 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:07.349 21:30:30 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:07.349 21:30:30 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:07.349 21:30:30 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:07.349 21:30:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:07.349 21:30:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.349 21:30:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:07.349 21:30:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:07.349 21:30:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:07.349 21:30:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.349 21:30:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.349 21:30:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.350 21:30:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:07.350 21:30:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:07.350 21:30:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:07.350 21:30:30 -- 
common/autotest_common.sh@10 -- # set +x 00:15:13.921 21:30:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:13.921 21:30:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:13.921 21:30:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:13.921 21:30:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:13.921 21:30:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:13.921 21:30:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:13.921 21:30:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:13.921 21:30:36 -- nvmf/common.sh@295 -- # net_devs=() 00:15:13.921 21:30:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:13.921 21:30:36 -- nvmf/common.sh@296 -- # e810=() 00:15:13.921 21:30:36 -- nvmf/common.sh@296 -- # local -ga e810 00:15:13.921 21:30:36 -- nvmf/common.sh@297 -- # x722=() 00:15:13.921 21:30:36 -- nvmf/common.sh@297 -- # local -ga x722 00:15:13.921 21:30:36 -- nvmf/common.sh@298 -- # mlx=() 00:15:13.921 21:30:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:13.921 21:30:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:13.921 21:30:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:13.921 21:30:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:13.921 21:30:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:13.921 21:30:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:13.921 21:30:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:13.921 21:30:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:13.921 21:30:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:13.921 21:30:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:13.921 21:30:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:13.921 21:30:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:13.921 21:30:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:13.921 21:30:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:13.921 21:30:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:13.921 21:30:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:13.921 21:30:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:13.921 21:30:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:13.921 21:30:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:13.921 21:30:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:13.921 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:13.921 21:30:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:13.921 21:30:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:13.921 21:30:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.921 21:30:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.921 21:30:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:13.921 21:30:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:13.921 21:30:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:13.921 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:13.921 21:30:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:13.921 21:30:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:13.921 21:30:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.921 21:30:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:15:13.921 21:30:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:13.921 21:30:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:13.921 21:30:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:13.921 21:30:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:13.922 21:30:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:13.922 21:30:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.922 21:30:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:13.922 21:30:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.922 21:30:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:13.922 Found net devices under 0000:af:00.0: cvl_0_0 00:15:13.922 21:30:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.922 21:30:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:13.922 21:30:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.922 21:30:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:13.922 21:30:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.922 21:30:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:13.922 Found net devices under 0000:af:00.1: cvl_0_1 00:15:13.922 21:30:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.922 21:30:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:13.922 21:30:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:13.922 21:30:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:13.922 21:30:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:13.922 21:30:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:13.922 21:30:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.922 21:30:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.922 21:30:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:13.922 21:30:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:13.922 21:30:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:13.922 21:30:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:13.922 21:30:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:13.922 21:30:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:13.922 21:30:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.922 21:30:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:13.922 21:30:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:13.922 21:30:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:13.922 21:30:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:13.922 21:30:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:13.922 21:30:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:13.922 21:30:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:13.922 21:30:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:13.922 21:30:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:13.922 21:30:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:13.922 21:30:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:13.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:13.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:15:13.922 00:15:13.922 --- 10.0.0.2 ping statistics --- 00:15:13.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.922 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:15:13.922 21:30:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:13.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:13.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:15:13.922 00:15:13.922 --- 10.0.0.1 ping statistics --- 00:15:13.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.922 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:15:13.922 21:30:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.922 21:30:36 -- nvmf/common.sh@411 -- # return 0 00:15:13.922 21:30:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:13.922 21:30:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.922 21:30:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:13.922 21:30:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:13.922 21:30:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.922 21:30:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:13.922 21:30:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:13.922 21:30:36 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:13.922 21:30:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:13.922 21:30:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:13.922 21:30:36 -- common/autotest_common.sh@10 -- # set +x 00:15:13.922 21:30:36 -- nvmf/common.sh@470 -- # nvmfpid=2831413 00:15:13.922 21:30:36 -- nvmf/common.sh@471 -- # waitforlisten 2831413 00:15:13.922 21:30:36 -- common/autotest_common.sh@817 -- # '[' -z 2831413 ']' 00:15:13.922 21:30:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.922 21:30:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:13.922 21:30:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.922 21:30:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:13.922 21:30:36 -- common/autotest_common.sh@10 -- # set +x 00:15:13.922 21:30:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:13.922 [2024-04-24 21:30:36.468932] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:15:13.922 [2024-04-24 21:30:36.468978] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.922 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.922 [2024-04-24 21:30:36.545488] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.922 [2024-04-24 21:30:36.615932] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.922 [2024-04-24 21:30:36.615967] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:13.922 [2024-04-24 21:30:36.615976] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.922 [2024-04-24 21:30:36.615985] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.922 [2024-04-24 21:30:36.615992] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.922 [2024-04-24 21:30:36.616017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.494 21:30:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:14.494 21:30:37 -- common/autotest_common.sh@850 -- # return 0 00:15:14.494 21:30:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:14.494 21:30:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:14.494 21:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:14.494 21:30:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.494 21:30:37 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:14.494 21:30:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.494 21:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:14.494 [2024-04-24 21:30:37.305149] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.494 21:30:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.494 21:30:37 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:14.494 21:30:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.494 21:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:14.494 Malloc0 00:15:14.494 21:30:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.494 21:30:37 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:14.494 21:30:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.494 21:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:14.494 21:30:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.494 21:30:37 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:14.494 21:30:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.494 21:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:14.494 21:30:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.494 21:30:37 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.494 21:30:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.494 21:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:14.494 [2024-04-24 21:30:37.363744] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.494 21:30:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.494 21:30:37 -- target/queue_depth.sh@30 -- # bdevperf_pid=2831618 00:15:14.494 21:30:37 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:14.494 21:30:37 -- target/queue_depth.sh@33 -- # waitforlisten 2831618 /var/tmp/bdevperf.sock 00:15:14.494 21:30:37 -- common/autotest_common.sh@817 -- # '[' -z 2831618 ']' 00:15:14.494 21:30:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:14.494 21:30:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:14.494 
21:30:37 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:14.494 21:30:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:14.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:14.494 21:30:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:14.494 21:30:37 -- common/autotest_common.sh@10 -- # set +x 00:15:14.754 [2024-04-24 21:30:37.411968] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:15:14.754 [2024-04-24 21:30:37.412017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831618 ] 00:15:14.754 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.754 [2024-04-24 21:30:37.480529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.754 [2024-04-24 21:30:37.549070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.691 21:30:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:15.691 21:30:38 -- common/autotest_common.sh@850 -- # return 0 00:15:15.691 21:30:38 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:15.691 21:30:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.691 21:30:38 -- common/autotest_common.sh@10 -- # set +x 00:15:15.691 NVMe0n1 00:15:15.691 21:30:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.691 21:30:38 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:15.691 Running I/O for 10 seconds... 
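(The target that this 10-second queue-depth run exercises was assembled by the rpc_cmd calls traced above; in terms of scripts/rpc.py the sequence is roughly the following, with the last two commands driving the bdevperf instance that was started with -z and its own RPC socket:)

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests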
00:15:27.904 00:15:27.904 Latency(us) 00:15:27.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.904 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:27.904 Verification LBA range: start 0x0 length 0x4000 00:15:27.904 NVMe0n1 : 10.07 12697.83 49.60 0.00 0.00 80372.90 18979.23 61236.84 00:15:27.904 =================================================================================================================== 00:15:27.904 Total : 12697.83 49.60 0.00 0.00 80372.90 18979.23 61236.84 00:15:27.904 0 00:15:27.904 21:30:48 -- target/queue_depth.sh@39 -- # killprocess 2831618 00:15:27.904 21:30:48 -- common/autotest_common.sh@936 -- # '[' -z 2831618 ']' 00:15:27.904 21:30:48 -- common/autotest_common.sh@940 -- # kill -0 2831618 00:15:27.904 21:30:48 -- common/autotest_common.sh@941 -- # uname 00:15:27.904 21:30:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:27.904 21:30:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2831618 00:15:27.904 21:30:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:27.904 21:30:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:27.904 21:30:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2831618' 00:15:27.904 killing process with pid 2831618 00:15:27.904 21:30:48 -- common/autotest_common.sh@955 -- # kill 2831618 00:15:27.904 Received shutdown signal, test time was about 10.000000 seconds 00:15:27.904 00:15:27.904 Latency(us) 00:15:27.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.904 =================================================================================================================== 00:15:27.904 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:27.904 21:30:48 -- common/autotest_common.sh@960 -- # wait 2831618 00:15:27.904 21:30:48 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:27.904 21:30:48 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:27.904 21:30:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:27.904 21:30:48 -- nvmf/common.sh@117 -- # sync 00:15:27.904 21:30:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.904 21:30:48 -- nvmf/common.sh@120 -- # set +e 00:15:27.904 21:30:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.904 21:30:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.904 rmmod nvme_tcp 00:15:27.904 rmmod nvme_fabrics 00:15:27.904 rmmod nvme_keyring 00:15:27.904 21:30:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.904 21:30:48 -- nvmf/common.sh@124 -- # set -e 00:15:27.904 21:30:48 -- nvmf/common.sh@125 -- # return 0 00:15:27.904 21:30:48 -- nvmf/common.sh@478 -- # '[' -n 2831413 ']' 00:15:27.904 21:30:48 -- nvmf/common.sh@479 -- # killprocess 2831413 00:15:27.904 21:30:48 -- common/autotest_common.sh@936 -- # '[' -z 2831413 ']' 00:15:27.904 21:30:48 -- common/autotest_common.sh@940 -- # kill -0 2831413 00:15:27.905 21:30:48 -- common/autotest_common.sh@941 -- # uname 00:15:27.905 21:30:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:27.905 21:30:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2831413 00:15:27.905 21:30:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:27.905 21:30:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:27.905 21:30:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2831413' 00:15:27.905 killing process with pid 2831413 00:15:27.905 
21:30:48 -- common/autotest_common.sh@955 -- # kill 2831413 00:15:27.905 21:30:48 -- common/autotest_common.sh@960 -- # wait 2831413 00:15:27.905 21:30:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:27.905 21:30:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:27.905 21:30:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:27.905 21:30:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.905 21:30:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.905 21:30:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.905 21:30:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.905 21:30:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.472 21:30:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:28.472 00:15:28.472 real 0m21.309s 00:15:28.472 user 0m24.785s 00:15:28.472 sys 0m6.727s 00:15:28.472 21:30:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:28.472 21:30:51 -- common/autotest_common.sh@10 -- # set +x 00:15:28.472 ************************************ 00:15:28.472 END TEST nvmf_queue_depth 00:15:28.472 ************************************ 00:15:28.472 21:30:51 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:28.472 21:30:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:28.472 21:30:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:28.472 21:30:51 -- common/autotest_common.sh@10 -- # set +x 00:15:28.731 ************************************ 00:15:28.731 START TEST nvmf_multipath 00:15:28.731 ************************************ 00:15:28.731 21:30:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:28.731 * Looking for test storage... 
00:15:28.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:28.731 21:30:51 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.731 21:30:51 -- nvmf/common.sh@7 -- # uname -s 00:15:28.731 21:30:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.731 21:30:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.731 21:30:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.731 21:30:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.731 21:30:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.731 21:30:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.731 21:30:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.731 21:30:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.731 21:30:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.731 21:30:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.731 21:30:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:28.731 21:30:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:28.731 21:30:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.731 21:30:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.731 21:30:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.731 21:30:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.731 21:30:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.731 21:30:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.731 21:30:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.731 21:30:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.731 21:30:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.731 21:30:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.731 21:30:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.731 21:30:51 -- paths/export.sh@5 -- # export PATH 00:15:28.731 21:30:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.731 21:30:51 -- nvmf/common.sh@47 -- # : 0 00:15:28.731 21:30:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.731 21:30:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.731 21:30:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.731 21:30:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.731 21:30:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.731 21:30:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.731 21:30:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.731 21:30:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.731 21:30:51 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:28.731 21:30:51 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:28.731 21:30:51 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:28.731 21:30:51 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.731 21:30:51 -- target/multipath.sh@43 -- # nvmftestinit 00:15:28.731 21:30:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:28.731 21:30:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.731 21:30:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:28.731 21:30:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:28.731 21:30:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:28.731 21:30:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.731 21:30:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.731 21:30:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.731 21:30:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:28.731 21:30:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:28.731 21:30:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:28.731 21:30:51 -- common/autotest_common.sh@10 -- # set +x 00:15:35.307 21:30:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:35.307 21:30:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:35.307 21:30:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:35.307 21:30:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:35.307 21:30:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:35.307 21:30:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:35.307 21:30:57 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:15:35.307 21:30:57 -- nvmf/common.sh@295 -- # net_devs=() 00:15:35.307 21:30:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:35.307 21:30:57 -- nvmf/common.sh@296 -- # e810=() 00:15:35.307 21:30:57 -- nvmf/common.sh@296 -- # local -ga e810 00:15:35.307 21:30:57 -- nvmf/common.sh@297 -- # x722=() 00:15:35.307 21:30:57 -- nvmf/common.sh@297 -- # local -ga x722 00:15:35.307 21:30:57 -- nvmf/common.sh@298 -- # mlx=() 00:15:35.307 21:30:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:35.307 21:30:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.307 21:30:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.307 21:30:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.307 21:30:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.307 21:30:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.307 21:30:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.307 21:30:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.307 21:30:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.307 21:30:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.307 21:30:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.307 21:30:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.307 21:30:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:35.307 21:30:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:35.307 21:30:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:35.307 21:30:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.307 21:30:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:35.307 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:35.307 21:30:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.307 21:30:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:35.307 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:35.307 21:30:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:35.307 21:30:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.307 21:30:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.307 21:30:57 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:15:35.307 21:30:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.307 21:30:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:35.307 Found net devices under 0000:af:00.0: cvl_0_0 00:15:35.307 21:30:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.307 21:30:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.307 21:30:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.307 21:30:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:35.307 21:30:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.307 21:30:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:35.307 Found net devices under 0000:af:00.1: cvl_0_1 00:15:35.307 21:30:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.307 21:30:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:35.307 21:30:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:35.307 21:30:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:35.307 21:30:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:35.307 21:30:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.307 21:30:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.307 21:30:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.307 21:30:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:35.307 21:30:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.307 21:30:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.307 21:30:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:35.307 21:30:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.307 21:30:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.307 21:30:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:35.307 21:30:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:35.307 21:30:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.307 21:30:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.307 21:30:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.307 21:30:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.308 21:30:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:35.308 21:30:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.308 21:30:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.308 21:30:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.308 21:30:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:35.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:15:35.308 00:15:35.308 --- 10.0.0.2 ping statistics --- 00:15:35.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.308 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:15:35.308 21:30:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:35.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:15:35.567 00:15:35.567 --- 10.0.0.1 ping statistics --- 00:15:35.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.567 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:15:35.567 21:30:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.567 21:30:58 -- nvmf/common.sh@411 -- # return 0 00:15:35.567 21:30:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:35.567 21:30:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.567 21:30:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:35.567 21:30:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:35.567 21:30:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.567 21:30:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:35.567 21:30:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:35.567 21:30:58 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:35.567 21:30:58 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:35.567 only one NIC for nvmf test 00:15:35.567 21:30:58 -- target/multipath.sh@47 -- # nvmftestfini 00:15:35.567 21:30:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:35.567 21:30:58 -- nvmf/common.sh@117 -- # sync 00:15:35.567 21:30:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:35.567 21:30:58 -- nvmf/common.sh@120 -- # set +e 00:15:35.567 21:30:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:35.567 21:30:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:35.567 rmmod nvme_tcp 00:15:35.567 rmmod nvme_fabrics 00:15:35.567 rmmod nvme_keyring 00:15:35.567 21:30:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:35.567 21:30:58 -- nvmf/common.sh@124 -- # set -e 00:15:35.567 21:30:58 -- nvmf/common.sh@125 -- # return 0 00:15:35.567 21:30:58 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:35.567 21:30:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:35.567 21:30:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:35.567 21:30:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:35.567 21:30:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:35.567 21:30:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:35.567 21:30:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.567 21:30:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.567 21:30:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.107 21:31:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:38.107 21:31:00 -- target/multipath.sh@48 -- # exit 0 00:15:38.107 21:31:00 -- target/multipath.sh@1 -- # nvmftestfini 00:15:38.107 21:31:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:38.107 21:31:00 -- nvmf/common.sh@117 -- # sync 00:15:38.107 21:31:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.107 21:31:00 -- nvmf/common.sh@120 -- # set +e 00:15:38.107 21:31:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.107 21:31:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.107 21:31:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.107 21:31:00 -- nvmf/common.sh@124 -- # set -e 00:15:38.107 21:31:00 -- nvmf/common.sh@125 -- # return 0 00:15:38.107 21:31:00 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:38.107 21:31:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:38.107 21:31:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:38.107 21:31:00 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:15:38.107 21:31:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:38.107 21:31:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:38.107 21:31:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.107 21:31:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.107 21:31:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.107 21:31:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:38.107 00:15:38.107 real 0m8.991s 00:15:38.107 user 0m1.928s 00:15:38.107 sys 0m5.098s 00:15:38.107 21:31:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:38.107 21:31:00 -- common/autotest_common.sh@10 -- # set +x 00:15:38.107 ************************************ 00:15:38.107 END TEST nvmf_multipath 00:15:38.107 ************************************ 00:15:38.107 21:31:00 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:38.107 21:31:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:38.107 21:31:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:38.107 21:31:00 -- common/autotest_common.sh@10 -- # set +x 00:15:38.107 ************************************ 00:15:38.108 START TEST nvmf_zcopy 00:15:38.108 ************************************ 00:15:38.108 21:31:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:38.108 * Looking for test storage... 00:15:38.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:38.108 21:31:00 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.108 21:31:00 -- nvmf/common.sh@7 -- # uname -s 00:15:38.108 21:31:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.108 21:31:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.108 21:31:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.108 21:31:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.108 21:31:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.108 21:31:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.108 21:31:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.108 21:31:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.108 21:31:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.108 21:31:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.108 21:31:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:38.108 21:31:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:38.108 21:31:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.108 21:31:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.108 21:31:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.108 21:31:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.108 21:31:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.108 21:31:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.108 21:31:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.108 21:31:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.108 
21:31:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.108 21:31:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.108 21:31:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.108 21:31:00 -- paths/export.sh@5 -- # export PATH 00:15:38.108 21:31:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.108 21:31:00 -- nvmf/common.sh@47 -- # : 0 00:15:38.108 21:31:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.108 21:31:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.108 21:31:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.108 21:31:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.108 21:31:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.108 21:31:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.108 21:31:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.108 21:31:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.108 21:31:00 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:38.108 21:31:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:38.108 21:31:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.108 21:31:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:38.108 21:31:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:38.108 21:31:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:38.108 21:31:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.108 21:31:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:15:38.108 21:31:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.108 21:31:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:38.108 21:31:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:38.108 21:31:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:38.108 21:31:00 -- common/autotest_common.sh@10 -- # set +x 00:15:44.707 21:31:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:44.707 21:31:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:44.707 21:31:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:44.707 21:31:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:44.707 21:31:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:44.707 21:31:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:44.707 21:31:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:44.707 21:31:07 -- nvmf/common.sh@295 -- # net_devs=() 00:15:44.707 21:31:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:44.707 21:31:07 -- nvmf/common.sh@296 -- # e810=() 00:15:44.707 21:31:07 -- nvmf/common.sh@296 -- # local -ga e810 00:15:44.707 21:31:07 -- nvmf/common.sh@297 -- # x722=() 00:15:44.707 21:31:07 -- nvmf/common.sh@297 -- # local -ga x722 00:15:44.707 21:31:07 -- nvmf/common.sh@298 -- # mlx=() 00:15:44.707 21:31:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:44.707 21:31:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:44.707 21:31:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:44.707 21:31:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:44.707 21:31:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:44.707 21:31:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:44.707 21:31:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:44.707 21:31:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:44.707 21:31:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:44.707 21:31:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:44.707 21:31:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:44.707 21:31:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:44.707 21:31:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:44.707 21:31:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:44.708 21:31:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:44.708 21:31:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:44.708 21:31:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:44.708 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:44.708 21:31:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:44.708 21:31:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:44.708 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:15:44.708 21:31:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:44.708 21:31:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:44.708 21:31:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.708 21:31:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:44.708 21:31:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.708 21:31:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:44.708 Found net devices under 0000:af:00.0: cvl_0_0 00:15:44.708 21:31:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.708 21:31:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:44.708 21:31:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.708 21:31:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:44.708 21:31:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.708 21:31:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:44.708 Found net devices under 0000:af:00.1: cvl_0_1 00:15:44.708 21:31:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.708 21:31:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:44.708 21:31:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:44.708 21:31:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:44.708 21:31:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:44.708 21:31:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:44.708 21:31:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:44.708 21:31:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:44.708 21:31:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:44.708 21:31:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:44.708 21:31:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:44.708 21:31:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:44.708 21:31:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:44.708 21:31:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:44.708 21:31:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:44.708 21:31:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:44.708 21:31:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:44.708 21:31:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:44.708 21:31:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:44.708 21:31:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:44.708 21:31:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:44.708 21:31:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:44.708 
21:31:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:44.708 21:31:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:44.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:15:44.708 00:15:44.708 --- 10.0.0.2 ping statistics --- 00:15:44.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.708 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:15:44.708 21:31:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:44.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:44.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:15:44.708 00:15:44.708 --- 10.0.0.1 ping statistics --- 00:15:44.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.708 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:15:44.708 21:31:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.708 21:31:07 -- nvmf/common.sh@411 -- # return 0 00:15:44.708 21:31:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:44.708 21:31:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.708 21:31:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:44.708 21:31:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.708 21:31:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:44.708 21:31:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:44.708 21:31:07 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:44.708 21:31:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:44.708 21:31:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:44.708 21:31:07 -- common/autotest_common.sh@10 -- # set +x 00:15:44.708 21:31:07 -- nvmf/common.sh@470 -- # nvmfpid=2840954 00:15:44.708 21:31:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:44.708 21:31:07 -- nvmf/common.sh@471 -- # waitforlisten 2840954 00:15:44.708 21:31:07 -- common/autotest_common.sh@817 -- # '[' -z 2840954 ']' 00:15:44.708 21:31:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.708 21:31:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:44.708 21:31:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.708 21:31:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:44.708 21:31:07 -- common/autotest_common.sh@10 -- # set +x 00:15:44.708 [2024-04-24 21:31:07.554407] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:15:44.708 [2024-04-24 21:31:07.554459] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.708 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.968 [2024-04-24 21:31:07.627801] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.968 [2024-04-24 21:31:07.695235] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:44.968 [2024-04-24 21:31:07.695277] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.968 [2024-04-24 21:31:07.695286] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.968 [2024-04-24 21:31:07.695295] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.968 [2024-04-24 21:31:07.695302] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.968 [2024-04-24 21:31:07.695325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.537 21:31:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:45.537 21:31:08 -- common/autotest_common.sh@850 -- # return 0 00:15:45.537 21:31:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:45.537 21:31:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:45.537 21:31:08 -- common/autotest_common.sh@10 -- # set +x 00:15:45.537 21:31:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.537 21:31:08 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:45.537 21:31:08 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:45.537 21:31:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.537 21:31:08 -- common/autotest_common.sh@10 -- # set +x 00:15:45.537 [2024-04-24 21:31:08.390089] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:45.537 21:31:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.537 21:31:08 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:45.537 21:31:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.537 21:31:08 -- common/autotest_common.sh@10 -- # set +x 00:15:45.537 21:31:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.537 21:31:08 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.537 21:31:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.537 21:31:08 -- common/autotest_common.sh@10 -- # set +x 00:15:45.537 [2024-04-24 21:31:08.406255] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.537 21:31:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.537 21:31:08 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:45.537 21:31:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.537 21:31:08 -- common/autotest_common.sh@10 -- # set +x 00:15:45.537 21:31:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.537 21:31:08 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:45.537 21:31:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.537 21:31:08 -- common/autotest_common.sh@10 -- # set +x 00:15:45.797 malloc0 00:15:45.797 21:31:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.797 21:31:08 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:45.797 21:31:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.797 21:31:08 -- common/autotest_common.sh@10 -- # set +x 00:15:45.797 21:31:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.797 21:31:08 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:45.797 21:31:08 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:45.797 21:31:08 -- nvmf/common.sh@521 -- # config=() 00:15:45.797 21:31:08 -- nvmf/common.sh@521 -- # local subsystem config 00:15:45.797 21:31:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:45.797 21:31:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:45.797 { 00:15:45.797 "params": { 00:15:45.797 "name": "Nvme$subsystem", 00:15:45.797 "trtype": "$TEST_TRANSPORT", 00:15:45.797 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:45.797 "adrfam": "ipv4", 00:15:45.797 "trsvcid": "$NVMF_PORT", 00:15:45.797 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:45.797 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:45.797 "hdgst": ${hdgst:-false}, 00:15:45.797 "ddgst": ${ddgst:-false} 00:15:45.797 }, 00:15:45.797 "method": "bdev_nvme_attach_controller" 00:15:45.797 } 00:15:45.797 EOF 00:15:45.797 )") 00:15:45.797 21:31:08 -- nvmf/common.sh@543 -- # cat 00:15:45.797 21:31:08 -- nvmf/common.sh@545 -- # jq . 00:15:45.797 21:31:08 -- nvmf/common.sh@546 -- # IFS=, 00:15:45.797 21:31:08 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:45.797 "params": { 00:15:45.797 "name": "Nvme1", 00:15:45.797 "trtype": "tcp", 00:15:45.797 "traddr": "10.0.0.2", 00:15:45.797 "adrfam": "ipv4", 00:15:45.797 "trsvcid": "4420", 00:15:45.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:45.797 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:45.797 "hdgst": false, 00:15:45.797 "ddgst": false 00:15:45.797 }, 00:15:45.797 "method": "bdev_nvme_attach_controller" 00:15:45.797 }' 00:15:45.797 [2024-04-24 21:31:08.488408] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:15:45.797 [2024-04-24 21:31:08.488462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841021 ] 00:15:45.797 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.797 [2024-04-24 21:31:08.558094] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.797 [2024-04-24 21:31:08.632340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.057 Running I/O for 10 seconds... 
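While the 10-second verify run above is in flight, note that everything it touches on the target side was created by the handful of rpc_cmd calls logged earlier in this test (target/zcopy.sh@22 through @30). A minimal sketch for replaying that bring-up by hand against a running nvmf_tgt, assuming rpc_cmd in the test framework forwards to scripts/rpc.py (an assumption; the wrapper itself is not shown in this excerpt):

# Sketch only: replays the logged rpc_cmd sequence; rpc.py path taken from this job's workspace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                                     # TCP transport, in-capsule data size 0, zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10  # allow any host, up to 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                                            # 32 MB RAM-backed bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1                    # NSID 1; later re-adds of the same NSID fail by design

All flags are copied verbatim from the xtrace records; only the rpc_cmd-to-rpc.py mapping is assumed.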
00:15:56.039 
00:15:56.039                                                              Latency(us)
00:15:56.039 Device Information          : runtime(s)     IOPS      MiB/s    Fail/s    TO/s     Average       min       max
00:15:56.039 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:56.039   Verification LBA range: start 0x0 length 0x1000
00:15:56.039   Nvme1n1                   :      10.01   8251.63     64.47      0.00     0.00    15474.08   1336.93  41523.61
00:15:56.039 ===================================================================================================================
00:15:56.039 Total                       :             8251.63     64.47      0.00     0.00    15474.08   1336.93  41523.61
00:15:56.298 21:31:19 -- target/zcopy.sh@39 -- # perfpid=2842838
00:15:56.298 21:31:19 -- target/zcopy.sh@41 -- # xtrace_disable
00:15:56.298 21:31:19 -- common/autotest_common.sh@10 -- # set +x
00:15:56.298 21:31:19 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:56.298 21:31:19 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:56.298 21:31:19 -- nvmf/common.sh@521 -- # config=()
00:15:56.298 21:31:19 -- nvmf/common.sh@521 -- # local subsystem config
00:15:56.298 21:31:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:15:56.298 21:31:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:15:56.298 {
00:15:56.298   "params": {
00:15:56.298     "name": "Nvme$subsystem",
00:15:56.298     "trtype": "$TEST_TRANSPORT",
00:15:56.298     "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:56.298     "adrfam": "ipv4",
00:15:56.298     "trsvcid": "$NVMF_PORT",
00:15:56.298     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:56.298     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:56.298     "hdgst": ${hdgst:-false},
00:15:56.298     "ddgst": ${ddgst:-false}
00:15:56.298   },
00:15:56.298   "method": "bdev_nvme_attach_controller"
00:15:56.298 }
00:15:56.298 EOF
00:15:56.298 )")
00:15:56.298 [2024-04-24 21:31:19.083146] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:56.298 [2024-04-24 21:31:19.083180] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:56.298 21:31:19 -- nvmf/common.sh@543 -- # cat
00:15:56.298 21:31:19 -- nvmf/common.sh@545 -- # jq . 
00:15:56.298 21:31:19 -- nvmf/common.sh@546 -- # IFS=, 00:15:56.298 21:31:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:56.298 "params": { 00:15:56.298 "name": "Nvme1", 00:15:56.298 "trtype": "tcp", 00:15:56.298 "traddr": "10.0.0.2", 00:15:56.298 "adrfam": "ipv4", 00:15:56.298 "trsvcid": "4420", 00:15:56.299 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:56.299 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:56.299 "hdgst": false, 00:15:56.299 "ddgst": false 00:15:56.299 }, 00:15:56.299 "method": "bdev_nvme_attach_controller" 00:15:56.299 }' 00:15:56.299 [2024-04-24 21:31:19.095140] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.299 [2024-04-24 21:31:19.095155] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.299 [2024-04-24 21:31:19.107165] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.299 [2024-04-24 21:31:19.107177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.299 [2024-04-24 21:31:19.119198] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.299 [2024-04-24 21:31:19.119209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.299 [2024-04-24 21:31:19.120264] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:15:56.299 [2024-04-24 21:31:19.120313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2842838 ] 00:15:56.299 [2024-04-24 21:31:19.131230] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.299 [2024-04-24 21:31:19.131241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.299 [2024-04-24 21:31:19.143261] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.299 [2024-04-24 21:31:19.143272] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.299 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.299 [2024-04-24 21:31:19.155293] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.299 [2024-04-24 21:31:19.155304] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.299 [2024-04-24 21:31:19.167328] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.299 [2024-04-24 21:31:19.167339] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.299 [2024-04-24 21:31:19.179356] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.299 [2024-04-24 21:31:19.179367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.190435] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.558 [2024-04-24 21:31:19.191388] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.191400] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.203420] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.203433] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
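Both bdevperf invocations in this test (the 10-second verify run above and the 5-second randrw run just started) consume the same gen_nvmf_target_json output over an anonymous /dev/fd pipe; the jq/printf records show it resolving to a single bdev_nvme_attach_controller entry. A standalone sketch of the same thing with a regular file, assuming the usual {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope that gen_nvmf_target_json wraps around the entry (the envelope itself is not visible in this excerpt):

# Sketch only: config taken verbatim from the printf record above; envelope assumed.
cat > /tmp/nvmf_bdevperf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON
bp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
$bp --json /tmp/nvmf_bdevperf.json -t 10 -q 128 -w verify -o 8192       # run 1: verify workload, 8 KiB I/O, queue depth 128
$bp --json /tmp/nvmf_bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192  # run 2: 50/50 random read/write

The two runs differ only in their bdevperf flags; everything else, including the host/target NQNs, is shared.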
00:15:56.558 [2024-04-24 21:31:19.215454] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.215465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.227490] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.227511] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.239535] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.239549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.251544] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.251555] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.260353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.558 [2024-04-24 21:31:19.263583] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.263596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.275625] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.275645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.287655] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.287670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.299685] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.299698] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.311716] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.311729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.323750] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.323769] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.335785] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.335800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.347833] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.347854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.359848] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.359863] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.371879] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.371895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.383912] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.383926] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.395946] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.395963] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.407981] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.407996] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.420016] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.420032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.432049] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.432063] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.558 [2024-04-24 21:31:19.444079] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.558 [2024-04-24 21:31:19.444092] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.456108] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.456120] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.468145] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.468160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.480175] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.480187] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.492206] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.492217] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.504242] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.504253] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.516278] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.516292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.528311] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.528323] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.540344] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.540355] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.552377] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.552392] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.564417] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.564437] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 Running I/O for 5 seconds... 00:15:56.817 [2024-04-24 21:31:19.576443] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.576464] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.605128] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.605149] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.621119] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.621140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.634582] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.634603] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.648466] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.648507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.662237] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.662258] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.675537] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.675558] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.689241] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.689261] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.817 [2024-04-24 21:31:19.703068] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.817 [2024-04-24 21:31:19.703089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.076 [2024-04-24 21:31:19.716628] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.077 [2024-04-24 21:31:19.716648] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.077 [2024-04-24 21:31:19.730376] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.077 [2024-04-24 21:31:19.730397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.077 [2024-04-24 21:31:19.741702] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.077 [2024-04-24 21:31:19.741722] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.077 [2024-04-24 21:31:19.755629] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.077 [2024-04-24 21:31:19.755648] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.077 [2024-04-24 21:31:19.769457] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:57.077 [2024-04-24 21:31:19.769477] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats for every subsequent add-namespace attempt from 21:31:19.780 through 21:31:22.580; duplicate entries omitted ...]
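The two messages form one failure: spdk_nvmf_subsystem_add_ns_ext() rejects an explicitly requested NSID that the subsystem already has in use, and the nvmf_subsystem_add_ns RPC handler then reports that the namespace could not be added. As a minimal sketch of how this error pair can be reproduced by hand with SPDK's scripts/rpc.py, assuming a running nvmf target on the default RPC socket — the NQN and bdev names below are illustrative, not taken from this run:

# create a subsystem and two backing malloc bdevs (64 MB, 512-byte blocks)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
# first add claims NSID 1 and succeeds
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
# second add requests the same NSID and fails with the error pair logged above
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc1

Omitting -n lets the target assign the lowest free NSID instead of failing, which is why the test only hits this path when it pins the NSID explicitly.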
00:15:59.712 [2024-04-24 21:31:22.588725] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:59.712 [2024-04-24 21:31:22.588745] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:59.712 [2024-04-24 21:31:22.597181] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:59.712 [2024-04-24 21:31:22.597200]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.972 [2024-04-24 21:31:22.605956] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.972 [2024-04-24 21:31:22.605976] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.972 [2024-04-24 21:31:22.614873] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.972 [2024-04-24 21:31:22.614892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.972 [2024-04-24 21:31:22.623220] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.972 [2024-04-24 21:31:22.623240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.972 [2024-04-24 21:31:22.632277] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.972 [2024-04-24 21:31:22.632298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.972 [2024-04-24 21:31:22.640759] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.972 [2024-04-24 21:31:22.640779] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.972 [2024-04-24 21:31:22.649251] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.972 [2024-04-24 21:31:22.649270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.972 [2024-04-24 21:31:22.656216] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.972 [2024-04-24 21:31:22.656235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.972 [2024-04-24 21:31:22.666635] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.972 [2024-04-24 21:31:22.666655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.972 [2024-04-24 21:31:22.675547] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.972 [2024-04-24 21:31:22.675566] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.683831] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.683849] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.692734] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.692753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.701365] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.701385] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.710515] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.710534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.719228] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.719248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.727848] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.727868] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.736723] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.736745] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.745547] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.745570] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.754572] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.754593] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.763182] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.763202] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.771691] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.771710] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.780513] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.780534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.789271] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.789290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.797982] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.798003] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.807013] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.807032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.815563] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.815584] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.824545] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.824565] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.833079] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.833099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.841232] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.841252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.849749] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.849769] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.973 [2024-04-24 21:31:22.858741] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.973 [2024-04-24 21:31:22.858761] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.867230] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.867249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.875756] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.875775] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.884066] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.884085] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.892599] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.892619] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.901350] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.901369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.910058] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.910077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.918076] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.918095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.927091] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.927111] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.935415] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.935438] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.943918] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.943938] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.952582] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.952611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.961719] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.961738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.970225] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.970244] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.979066] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.979085] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.987697] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.987716] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:22.996468] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:22.996487] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:23.005240] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:23.005260] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:23.014322] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:23.014341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:23.023074] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:23.023094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:23.031975] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:23.031994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:23.040410] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:23.040429] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:23.048749] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:23.048772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:23.057280] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:23.057298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:23.065890] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:23.065909] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:23.074838] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:23.074857] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:23.083642] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:23.083661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:23.092401] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:23.092420] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:23.103795] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:23.103818] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.233 [2024-04-24 21:31:23.115137] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.233 [2024-04-24 21:31:23.115156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.125487] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.125506] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.135588] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.135607] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.143981] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.144000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.150785] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.150804] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.160607] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.160626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.169654] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.169673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.178082] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.178101] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.186446] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.186471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.195235] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.195254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.201873] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.201892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.216197] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.216218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.226424] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.226444] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.233672] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.233690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.243035] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.243054] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.249653] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.249672] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.260718] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.260737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.269982] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.270001] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.278366] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.278389] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.286645] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.286663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.295045] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.295064] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.303251] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.303270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.311943] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.311962] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.320992] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.321012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.327765] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.327784] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.338908] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.338927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.347617] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.347636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.356245] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.356265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.493 [2024-04-24 21:31:23.365185] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.493 [2024-04-24 21:31:23.365205] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.494 [2024-04-24 21:31:23.373519] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.494 [2024-04-24 21:31:23.373538] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.753 [2024-04-24 21:31:23.381834] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.753 [2024-04-24 21:31:23.381852] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.753 [2024-04-24 21:31:23.390200] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.753 [2024-04-24 21:31:23.390219] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.753 [2024-04-24 21:31:23.399075] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.753 [2024-04-24 21:31:23.399094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.753 [2024-04-24 21:31:23.407734] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.753 [2024-04-24 21:31:23.407754] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.753 [2024-04-24 21:31:23.416729] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.753 [2024-04-24 21:31:23.416748] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.753 [2024-04-24 21:31:23.425676] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.753 [2024-04-24 21:31:23.425695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.753 [2024-04-24 21:31:23.442877] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.753 [2024-04-24 21:31:23.442897] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.753 [2024-04-24 21:31:23.451445] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.753 [2024-04-24 21:31:23.451475] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.753 [2024-04-24 21:31:23.460605] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.753 [2024-04-24 21:31:23.460625] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.753 [2024-04-24 21:31:23.469422] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.753 [2024-04-24 21:31:23.469440] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.753 [2024-04-24 21:31:23.478349] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.753 [2024-04-24 21:31:23.478368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.753 [2024-04-24 21:31:23.487373] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.753 [2024-04-24 21:31:23.487392] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.753 [2024-04-24 21:31:23.495911] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.495929] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.504697] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.504716] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.513488] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.513507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.522507] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.522526] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.531378] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.531397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.538239] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.538257] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.548002] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.548021] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.556712] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.556731] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.565070] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.565090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.574331] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.574350] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.583042] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.583061] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.592221] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.592240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.601527] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.601546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.610020] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.610039] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.618411] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.618435] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.627090] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.627110] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.754 [2024-04-24 21:31:23.635605] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.754 [2024-04-24 21:31:23.635624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.013 [2024-04-24 21:31:23.644669] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.013 [2024-04-24 21:31:23.644688] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.013 [2024-04-24 21:31:23.653671] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.013 [2024-04-24 21:31:23.653690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.013 [2024-04-24 21:31:23.663126] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.013 [2024-04-24 21:31:23.663145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.013 [2024-04-24 21:31:23.672224] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.013 [2024-04-24 21:31:23.672243] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.013 [2024-04-24 21:31:23.680388] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.013 [2024-04-24 21:31:23.680407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.013 [2024-04-24 21:31:23.689352] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.013 [2024-04-24 21:31:23.689371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.013 [2024-04-24 21:31:23.698341] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.013 [2024-04-24 21:31:23.698360] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.013 [2024-04-24 21:31:23.707162] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.013 [2024-04-24 21:31:23.707181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.013 [2024-04-24 21:31:23.716013] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.013 [2024-04-24 21:31:23.716032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.725053] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.725073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.734071] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.734091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.742712] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.742732] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.751830] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.751849] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.758483] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.758503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.768421] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.768440] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.776986] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.777005] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.786029] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.786048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.794448] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.794475] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.803417] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.803436] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.811685] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.811704] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.820671] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.820690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.829566] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.829585] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.837581] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.837601] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.847055] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.847075] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.856080] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.856100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.864856] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.864875] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.873131] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.873150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.881612] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.881631] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.889947] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.889967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.014 [2024-04-24 21:31:23.898110] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.014 [2024-04-24 21:31:23.898130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:23.906632] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:23.906651] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:23.915665] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:23.915685] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:23.924028] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:23.924048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:23.932194] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:23.932213] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:23.940946] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:23.940965] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:23.949304] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:23.949322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:23.958267] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:23.958286] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:23.967545] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:23.967567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:23.976300] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:23.976321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:23.984830] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:23.984849] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:23.993810] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:23.993829] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.002807] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.002826] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.011153] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.011171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.019874] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.019892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.028315] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.028334] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.036704] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.036723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.045030] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.045049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.053624] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.053643] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.062761] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.062780] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.070355] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.070375] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.077886] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.077904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.088139] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.088158] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.095182] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.095202] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.106086] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.106105] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.114826] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.114846] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.123186] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.123206] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.131742] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.274 [2024-04-24 21:31:24.131761] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.274 [2024-04-24 21:31:24.140556] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.275 [2024-04-24 21:31:24.140576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.275 [2024-04-24 21:31:24.148823] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.275 [2024-04-24 21:31:24.148843] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.275 [2024-04-24 21:31:24.157622] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.275 [2024-04-24 21:31:24.157642] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.535 [2024-04-24 21:31:24.166020] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.535 [2024-04-24 21:31:24.166042] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.535 [2024-04-24 21:31:24.174438] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.535 [2024-04-24 21:31:24.174466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.535 [2024-04-24 21:31:24.182886] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.535 [2024-04-24 21:31:24.182904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.535 [2024-04-24 21:31:24.191886] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.535 [2024-04-24 21:31:24.191905] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.535 [2024-04-24 21:31:24.200357] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.535 [2024-04-24 21:31:24.200376] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.535 [2024-04-24 21:31:24.208936] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.535 [2024-04-24 21:31:24.208955] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.535 [2024-04-24 21:31:24.217194] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.535 [2024-04-24 21:31:24.217213] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.535 [2024-04-24 21:31:24.226579] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.535 [2024-04-24 21:31:24.226599] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.535 [2024-04-24 21:31:24.235307] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.535 [2024-04-24 21:31:24.235327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.535 [2024-04-24 21:31:24.243854] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.535 [2024-04-24 21:31:24.243874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.535 [2024-04-24 21:31:24.252624] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.535 [2024-04-24 21:31:24.252643] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.535 [2024-04-24 21:31:24.261730] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.535 [2024-04-24 21:31:24.261750] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.535
[... the same two-line error pair repeats every 8-9 ms from 21:31:24.270264 through 21:31:24.580845 ...]
[2024-04-24 21:31:24.588853] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.795 [2024-04-24 21:31:24.588872] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.795
00:16:01.795 Latency(us) 00:16:01.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.795 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:01.795 Nvme1n1 : 5.01 16407.96 128.19 0.00 0.00 7794.76 2280.65 55364.81 00:16:01.795 =================================================================================================================== 00:16:01.795 Total : 16407.96 128.19 0.00 0.00 7794.76 2280.65 55364.81 00:16:01.795
[2024-04-24 21:31:24.595398] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.795 [2024-04-24 21:31:24.595416] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.795
[... the same pair repeats every ~8 ms from 21:31:24.603416 through 21:31:24.787917 while the subsystem stays paused ...]
[2024-04-24 21:31:24.795929] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.055 [2024-04-24 21:31:24.795940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2842838) - No such process 00:16:02.055 21:31:24 -- target/zcopy.sh@49 -- # wait 2842838 00:16:02.055 21:31:24 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.055 21:31:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:02.055 21:31:24 -- common/autotest_common.sh@10 -- # set +x 00:16:02.055 21:31:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:02.055 21:31:24 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:02.055 21:31:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:02.055 21:31:24 -- common/autotest_common.sh@10 -- # set +x 00:16:02.055 delay0 00:16:02.055 21:31:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:02.055 21:31:24 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:02.055 21:31:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:02.055 21:31:24 -- common/autotest_common.sh@10 -- # set +x 00:16:02.055 21:31:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:02.055 21:31:24 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:02.055 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.314 [2024-04-24 21:31:24.969659] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:08.951 Initializing NVMe Controllers 00:16:08.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:08.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:08.951 Initialization complete. Launching workers. 00:16:08.951 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 108 00:16:08.951 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 391, failed to submit 37 00:16:08.951 success 205, unsuccess 186, failed 0 00:16:08.951 21:31:31 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:08.951 21:31:31 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:08.951 21:31:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:08.951 21:31:31 -- nvmf/common.sh@117 -- # sync 00:16:08.951 21:31:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:08.951 21:31:31 -- nvmf/common.sh@120 -- # set +e 00:16:08.951 21:31:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:08.951 21:31:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:08.951 rmmod nvme_tcp 00:16:08.951 rmmod nvme_fabrics 00:16:08.951 rmmod nvme_keyring 00:16:08.951 21:31:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:08.951 21:31:31 -- nvmf/common.sh@124 -- # set -e 00:16:08.951 21:31:31 -- nvmf/common.sh@125 -- # return 0 00:16:08.951 21:31:31 -- nvmf/common.sh@478 -- # '[' -n 2840954 ']' 00:16:08.951 21:31:31 -- nvmf/common.sh@479 -- # killprocess 2840954 00:16:08.951 21:31:31 -- common/autotest_common.sh@936 -- # '[' -z 2840954 ']' 00:16:08.951 21:31:31 -- common/autotest_common.sh@940 -- # kill -0 2840954 00:16:08.951 21:31:31 -- common/autotest_common.sh@941 -- # uname 00:16:08.951 21:31:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:08.951 21:31:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2840954 00:16:08.951 21:31:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:08.951 21:31:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:08.951 21:31:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2840954' 00:16:08.951 killing process with pid 2840954 00:16:08.951 21:31:31 -- common/autotest_common.sh@955 -- # kill 2840954 00:16:08.951 21:31:31 -- common/autotest_common.sh@960 -- # wait 2840954 00:16:08.951 21:31:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:08.951 21:31:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:08.951 21:31:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:08.951 21:31:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:08.951 21:31:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:08.951 21:31:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.951 21:31:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.951 21:31:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.856 21:31:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:10.856 00:16:10.856 real 0m32.918s 00:16:10.856 user 0m42.000s 00:16:10.856 sys 0m13.069s 00:16:10.856 21:31:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:10.856 21:31:33 -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.856 ************************************ 00:16:10.856 END TEST nvmf_zcopy 00:16:10.856 ************************************ 00:16:10.856 21:31:33 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:10.856 21:31:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:10.856 21:31:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:10.856 21:31:33 -- common/autotest_common.sh@10 -- # set +x 00:16:11.117 ************************************ 00:16:11.117 START TEST nvmf_nmic 00:16:11.117 ************************************ 00:16:11.117 21:31:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:11.117 * Looking for test storage... 00:16:11.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.117 21:31:33 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.117 21:31:33 -- nvmf/common.sh@7 -- # uname -s 00:16:11.117 21:31:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.117 21:31:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.117 21:31:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.117 21:31:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.117 21:31:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.117 21:31:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.117 21:31:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.117 21:31:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.117 21:31:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.117 21:31:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.117 21:31:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:11.117 21:31:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:11.117 21:31:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.117 21:31:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.117 21:31:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.117 21:31:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.117 21:31:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.117 21:31:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.117 21:31:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.117 21:31:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.117 21:31:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.117 21:31:33 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.117 21:31:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.117 21:31:33 -- paths/export.sh@5 -- # export PATH 00:16:11.117 21:31:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.117 21:31:33 -- nvmf/common.sh@47 -- # : 0 00:16:11.117 21:31:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:11.117 21:31:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:11.117 21:31:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.117 21:31:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.117 21:31:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.117 21:31:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:11.117 21:31:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:11.117 21:31:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:11.117 21:31:33 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:11.117 21:31:33 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:11.117 21:31:33 -- target/nmic.sh@14 -- # nvmftestinit 00:16:11.117 21:31:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:11.117 21:31:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.117 21:31:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:11.117 21:31:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:11.117 21:31:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:11.117 21:31:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.117 21:31:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.117 21:31:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.117 21:31:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:11.117 21:31:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:11.117 21:31:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:11.117 21:31:33 -- common/autotest_common.sh@10 -- # set +x 00:16:17.703 21:31:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:16:17.703 21:31:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:17.703 21:31:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:17.703 21:31:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:17.703 21:31:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:17.703 21:31:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:17.703 21:31:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:17.703 21:31:40 -- nvmf/common.sh@295 -- # net_devs=() 00:16:17.703 21:31:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:17.703 21:31:40 -- nvmf/common.sh@296 -- # e810=() 00:16:17.703 21:31:40 -- nvmf/common.sh@296 -- # local -ga e810 00:16:17.703 21:31:40 -- nvmf/common.sh@297 -- # x722=() 00:16:17.703 21:31:40 -- nvmf/common.sh@297 -- # local -ga x722 00:16:17.703 21:31:40 -- nvmf/common.sh@298 -- # mlx=() 00:16:17.703 21:31:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:17.703 21:31:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.703 21:31:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.703 21:31:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.703 21:31:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.703 21:31:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.703 21:31:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.703 21:31:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.703 21:31:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.703 21:31:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.703 21:31:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.703 21:31:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.703 21:31:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:17.703 21:31:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:17.703 21:31:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:17.703 21:31:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.703 21:31:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:17.703 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:17.703 21:31:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.703 21:31:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:17.703 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:17.703 21:31:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
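For reference, the scan above reduces to matching PCI vendor/device IDs and then reading each matched function's net/ directory in sysfs, which is what the loop that follows does to reach cvl_0_0 and cvl_0_1. A minimal standalone sketch of the same walk (not the test's own helper; the 0x8086:0x159b pair is the E810 ID matched above):

#!/usr/bin/env bash
# Sketch: map Intel E810 (0x8086:0x159b) PCI functions to their kernel net
# interfaces, mirroring what gather_supported_nvmf_pci_devs does above.
for dev in /sys/bus/pci/devices/*; do
  vendor=$(<"$dev/vendor")    # e.g. 0x8086
  device=$(<"$dev/device")    # e.g. 0x159b
  [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
  for net in "$dev"/net/*; do
    [[ -e $net ]] && echo "${dev##*/} -> ${net##*/}"    # e.g. 0000:af:00.0 -> cvl_0_0
  done
done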
00:16:17.703 21:31:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:17.703 21:31:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.703 21:31:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.703 21:31:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:17.703 21:31:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.703 21:31:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:17.703 Found net devices under 0000:af:00.0: cvl_0_0 00:16:17.703 21:31:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.703 21:31:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.703 21:31:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.704 21:31:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:17.704 21:31:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.704 21:31:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:17.704 Found net devices under 0000:af:00.1: cvl_0_1 00:16:17.704 21:31:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.704 21:31:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:17.704 21:31:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:17.704 21:31:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:17.704 21:31:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:17.704 21:31:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:17.704 21:31:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.704 21:31:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.704 21:31:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:17.704 21:31:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:17.704 21:31:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:17.704 21:31:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:17.704 21:31:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:17.704 21:31:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:17.704 21:31:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.704 21:31:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:17.704 21:31:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:17.704 21:31:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:17.704 21:31:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:17.704 21:31:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:17.704 21:31:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:17.704 21:31:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:17.704 21:31:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:17.963 21:31:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:17.963 21:31:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:17.963 21:31:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:17.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:17.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:16:17.963 00:16:17.963 --- 10.0.0.2 ping statistics --- 00:16:17.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.963 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:16:17.963 21:31:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:17.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:16:17.963 00:16:17.963 --- 10.0.0.1 ping statistics --- 00:16:17.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.963 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:16:17.963 21:31:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.963 21:31:40 -- nvmf/common.sh@411 -- # return 0 00:16:17.963 21:31:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:17.963 21:31:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.963 21:31:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:17.964 21:31:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:17.964 21:31:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.964 21:31:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:17.964 21:31:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:17.964 21:31:40 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:17.964 21:31:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:17.964 21:31:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:17.964 21:31:40 -- common/autotest_common.sh@10 -- # set +x 00:16:17.964 21:31:40 -- nvmf/common.sh@470 -- # nvmfpid=2848657 00:16:17.964 21:31:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:17.964 21:31:40 -- nvmf/common.sh@471 -- # waitforlisten 2848657 00:16:17.964 21:31:40 -- common/autotest_common.sh@817 -- # '[' -z 2848657 ']' 00:16:17.964 21:31:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.964 21:31:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:17.964 21:31:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.964 21:31:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:17.964 21:31:40 -- common/autotest_common.sh@10 -- # set +x 00:16:17.964 [2024-04-24 21:31:40.758945] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:16:17.964 [2024-04-24 21:31:40.758999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.964 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.964 [2024-04-24 21:31:40.837078] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:18.223 [2024-04-24 21:31:40.911871] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.223 [2024-04-24 21:31:40.911913] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:18.223 [2024-04-24 21:31:40.911923] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.223 [2024-04-24 21:31:40.911932] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.223 [2024-04-24 21:31:40.911939] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.223 [2024-04-24 21:31:40.911990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.223 [2024-04-24 21:31:40.912082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.224 [2024-04-24 21:31:40.912166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.224 [2024-04-24 21:31:40.912168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.790 21:31:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:18.790 21:31:41 -- common/autotest_common.sh@850 -- # return 0 00:16:18.790 21:31:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:18.790 21:31:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:18.790 21:31:41 -- common/autotest_common.sh@10 -- # set +x 00:16:18.790 21:31:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.790 21:31:41 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.790 21:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.790 21:31:41 -- common/autotest_common.sh@10 -- # set +x 00:16:18.790 [2024-04-24 21:31:41.618322] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.790 21:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.790 21:31:41 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:18.790 21:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.790 21:31:41 -- common/autotest_common.sh@10 -- # set +x 00:16:18.790 Malloc0 00:16:18.790 21:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.790 21:31:41 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:18.790 21:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.790 21:31:41 -- common/autotest_common.sh@10 -- # set +x 00:16:18.790 21:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.790 21:31:41 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:18.790 21:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.790 21:31:41 -- common/autotest_common.sh@10 -- # set +x 00:16:18.790 21:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.790 21:31:41 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.790 21:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.790 21:31:41 -- common/autotest_common.sh@10 -- # set +x 00:16:18.790 [2024-04-24 21:31:41.672869] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.790 21:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.049 21:31:41 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:19.049 test case1: single bdev can't be used in multiple subsystems 00:16:19.049 21:31:41 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:19.049 21:31:41 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.049 21:31:41 -- common/autotest_common.sh@10 -- # set +x 00:16:19.049 21:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.049 21:31:41 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:19.049 21:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.049 21:31:41 -- common/autotest_common.sh@10 -- # set +x 00:16:19.049 21:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.049 21:31:41 -- target/nmic.sh@28 -- # nmic_status=0 00:16:19.049 21:31:41 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:19.049 21:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.049 21:31:41 -- common/autotest_common.sh@10 -- # set +x 00:16:19.049 [2024-04-24 21:31:41.696748] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:19.050 [2024-04-24 21:31:41.696769] subsystem.c:1934:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:19.050 [2024-04-24 21:31:41.696779] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:19.050 request: 00:16:19.050 { 00:16:19.050 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:19.050 "namespace": { 00:16:19.050 "bdev_name": "Malloc0", 00:16:19.050 "no_auto_visible": false 00:16:19.050 }, 00:16:19.050 "method": "nvmf_subsystem_add_ns", 00:16:19.050 "req_id": 1 00:16:19.050 } 00:16:19.050 Got JSON-RPC error response 00:16:19.050 response: 00:16:19.050 { 00:16:19.050 "code": -32602, 00:16:19.050 "message": "Invalid parameters" 00:16:19.050 } 00:16:19.050 21:31:41 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:19.050 21:31:41 -- target/nmic.sh@29 -- # nmic_status=1 00:16:19.050 21:31:41 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:19.050 21:31:41 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:19.050 Adding namespace failed - expected result. 
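The expected failure above can be reproduced by hand with the same RPCs the script issues through rpc_cmd. A minimal sketch against a running target (scripts/rpc.py talking to the default /var/tmp/spdk.sock; bdev name, sizes, and NQNs as in this run):

# One bdev can back a namespace in only one subsystem at a time: the first
# nvmf_subsystem_add_ns claims Malloc0 (type exclusive_write), so the second must fail.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # JSON-RPC error -32602, as above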
00:16:19.050 21:31:41 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:19.050 test case2: host connect to nvmf target in multiple paths 00:16:19.050 21:31:41 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:19.050 21:31:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.050 21:31:41 -- common/autotest_common.sh@10 -- # set +x 00:16:19.050 [2024-04-24 21:31:41.708900] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:19.050 21:31:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.050 21:31:41 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.429 21:31:42 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:21.808 21:31:44 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:21.808 21:31:44 -- common/autotest_common.sh@1184 -- # local i=0 00:16:21.808 21:31:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.808 21:31:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:21.808 21:31:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:23.752 21:31:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:23.752 21:31:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:23.752 21:31:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:23.752 21:31:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:23.752 21:31:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:23.752 21:31:46 -- common/autotest_common.sh@1194 -- # return 0 00:16:23.752 21:31:46 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:23.752 [global] 00:16:23.752 thread=1 00:16:23.752 invalidate=1 00:16:23.752 rw=write 00:16:23.752 time_based=1 00:16:23.752 runtime=1 00:16:23.752 ioengine=libaio 00:16:23.752 direct=1 00:16:23.752 bs=4096 00:16:23.752 iodepth=1 00:16:23.752 norandommap=0 00:16:23.752 numjobs=1 00:16:23.752 00:16:23.752 verify_dump=1 00:16:23.752 verify_backlog=512 00:16:23.752 verify_state_save=0 00:16:23.752 do_verify=1 00:16:23.752 verify=crc32c-intel 00:16:23.752 [job0] 00:16:23.752 filename=/dev/nvme0n1 00:16:23.752 Could not set queue depth (nvme0n1) 00:16:24.016 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:24.016 fio-3.35 00:16:24.016 Starting 1 thread 00:16:24.984 00:16:24.984 job0: (groupid=0, jobs=1): err= 0: pid=2849885: Wed Apr 24 21:31:47 2024 00:16:24.984 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:24.984 slat (nsec): min=8793, max=42997, avg=9486.51, stdev=1414.44 00:16:24.984 clat (usec): min=384, max=1017, avg=547.00, stdev=52.53 00:16:24.984 lat (usec): min=393, max=1026, avg=556.49, stdev=52.63 00:16:24.984 clat percentiles (usec): 00:16:24.984 | 1.00th=[ 420], 5.00th=[ 486], 10.00th=[ 498], 20.00th=[ 506], 00:16:24.984 | 30.00th=[ 519], 40.00th=[ 537], 50.00th=[ 553], 60.00th=[ 570], 00:16:24.984 | 70.00th=[ 578], 80.00th=[ 
578], 90.00th=[ 586], 95.00th=[ 594], 00:16:24.984 | 99.00th=[ 676], 99.50th=[ 799], 99.90th=[ 1012], 99.95th=[ 1020], 00:16:24.984 | 99.99th=[ 1020] 00:16:24.984 write: IOPS=1485, BW=5942KiB/s (6085kB/s)(5948KiB/1001msec); 0 zone resets 00:16:24.984 slat (nsec): min=11673, max=47917, avg=12840.93, stdev=2455.15 00:16:24.984 clat (usec): min=203, max=812, avg=271.95, stdev=78.91 00:16:24.984 lat (usec): min=216, max=859, avg=284.80, stdev=79.58 00:16:24.984 clat percentiles (usec): 00:16:24.984 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 223], 00:16:24.984 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 258], 60.00th=[ 269], 00:16:24.984 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 465], 00:16:24.984 | 99.00th=[ 635], 99.50th=[ 668], 99.90th=[ 775], 99.95th=[ 816], 00:16:24.984 | 99.99th=[ 816] 00:16:24.984 bw ( KiB/s): min= 4696, max= 4696, per=79.03%, avg=4696.00, stdev= 0.00, samples=1 00:16:24.984 iops : min= 1174, max= 1174, avg=1174.00, stdev= 0.00, samples=1 00:16:24.984 lat (usec) : 250=27.92%, 500=34.05%, 750=37.59%, 1000=0.32% 00:16:24.984 lat (msec) : 2=0.12% 00:16:24.984 cpu : usr=3.50%, sys=3.30%, ctx=2511, majf=0, minf=2 00:16:24.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:24.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.984 issued rwts: total=1024,1487,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:24.984 00:16:24.984 Run status group 0 (all jobs): 00:16:24.984 READ: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:16:24.984 WRITE: bw=5942KiB/s (6085kB/s), 5942KiB/s-5942KiB/s (6085kB/s-6085kB/s), io=5948KiB (6091kB), run=1001-1001msec 00:16:24.984 00:16:24.984 Disk stats (read/write): 00:16:24.984 nvme0n1: ios=1074/1064, merge=0/0, ticks=628/301, in_queue=929, util=93.19% 00:16:24.984 21:31:47 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:25.242 21:31:48 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:25.242 21:31:48 -- common/autotest_common.sh@1205 -- # local i=0 00:16:25.242 21:31:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:25.242 21:31:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.242 21:31:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:25.242 21:31:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.242 21:31:48 -- common/autotest_common.sh@1217 -- # return 0 00:16:25.242 21:31:48 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:25.242 21:31:48 -- target/nmic.sh@53 -- # nvmftestfini 00:16:25.243 21:31:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:25.243 21:31:48 -- nvmf/common.sh@117 -- # sync 00:16:25.243 21:31:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:25.243 21:31:48 -- nvmf/common.sh@120 -- # set +e 00:16:25.243 21:31:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:25.243 21:31:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:25.243 rmmod nvme_tcp 00:16:25.243 rmmod nvme_fabrics 00:16:25.243 rmmod nvme_keyring 00:16:25.501 21:31:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.501 21:31:48 -- nvmf/common.sh@124 -- # set -e 00:16:25.501 21:31:48 -- 
nvmf/common.sh@125 -- # return 0 00:16:25.501 21:31:48 -- nvmf/common.sh@478 -- # '[' -n 2848657 ']' 00:16:25.501 21:31:48 -- nvmf/common.sh@479 -- # killprocess 2848657 00:16:25.501 21:31:48 -- common/autotest_common.sh@936 -- # '[' -z 2848657 ']' 00:16:25.501 21:31:48 -- common/autotest_common.sh@940 -- # kill -0 2848657 00:16:25.501 21:31:48 -- common/autotest_common.sh@941 -- # uname 00:16:25.501 21:31:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:25.501 21:31:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2848657 00:16:25.501 21:31:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:25.501 21:31:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:25.501 21:31:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2848657' 00:16:25.501 killing process with pid 2848657 00:16:25.501 21:31:48 -- common/autotest_common.sh@955 -- # kill 2848657 00:16:25.501 21:31:48 -- common/autotest_common.sh@960 -- # wait 2848657 00:16:25.760 21:31:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:25.761 21:31:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:25.761 21:31:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:25.761 21:31:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.761 21:31:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:25.761 21:31:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.761 21:31:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.761 21:31:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.722 21:31:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:27.722 00:16:27.722 real 0m16.747s 00:16:27.722 user 0m39.706s 00:16:27.722 sys 0m6.340s 00:16:27.722 21:31:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:27.722 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:16:27.722 ************************************ 00:16:27.722 END TEST nvmf_nmic 00:16:27.722 ************************************ 00:16:27.722 21:31:50 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:27.722 21:31:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:27.722 21:31:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.722 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:16:27.981 ************************************ 00:16:27.981 START TEST nvmf_fio_target 00:16:27.981 ************************************ 00:16:27.981 21:31:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:27.981 * Looking for test storage... 
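Before nvmf_fio_target repeats the same bring-up, note the teardown pattern nvmf_nmic just used: a single nvme disconnect by NQN dropped both test-case2 paths at once ("disconnected 2 controller(s)" above). A minimal sketch of that connect/verify/disconnect pairing (transport address and NQN as in this run; the --hostnqn/--hostid flags are omitted here, and the single-device check assumes native NVMe multipath merges the two paths):

# Two portals, one subsystem: connect both paths, check the serial, drop both by NQN.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 1 namespace
nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # removes every controller matching the NQN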
00:16:27.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:27.981 21:31:50 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.981 21:31:50 -- nvmf/common.sh@7 -- # uname -s 00:16:27.981 21:31:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.981 21:31:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.981 21:31:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.981 21:31:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.981 21:31:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.981 21:31:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.981 21:31:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.981 21:31:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.981 21:31:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.981 21:31:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.981 21:31:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:27.981 21:31:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:27.981 21:31:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.981 21:31:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.981 21:31:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.981 21:31:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.981 21:31:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.981 21:31:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.981 21:31:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.981 21:31:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.981 21:31:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.982 21:31:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.982 21:31:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.982 21:31:50 -- paths/export.sh@5 -- # export PATH 00:16:27.982 21:31:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.982 21:31:50 -- nvmf/common.sh@47 -- # : 0 00:16:27.982 21:31:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:27.982 21:31:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:27.982 21:31:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.982 21:31:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.982 21:31:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.982 21:31:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:27.982 21:31:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:27.982 21:31:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.240 21:31:50 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:28.240 21:31:50 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:28.240 21:31:50 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:28.240 21:31:50 -- target/fio.sh@16 -- # nvmftestinit 00:16:28.240 21:31:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:28.240 21:31:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.240 21:31:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:28.240 21:31:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:28.240 21:31:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:28.240 21:31:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.240 21:31:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.240 21:31:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.240 21:31:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:28.240 21:31:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:28.240 21:31:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:28.240 21:31:50 -- common/autotest_common.sh@10 -- # set +x 00:16:34.809 21:31:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:34.809 21:31:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:34.809 21:31:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:34.809 21:31:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:34.809 21:31:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:34.809 21:31:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:34.809 21:31:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:34.809 21:31:57 -- nvmf/common.sh@295 -- # net_devs=() 
00:16:34.809 21:31:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:34.809 21:31:57 -- nvmf/common.sh@296 -- # e810=() 00:16:34.809 21:31:57 -- nvmf/common.sh@296 -- # local -ga e810 00:16:34.809 21:31:57 -- nvmf/common.sh@297 -- # x722=() 00:16:34.809 21:31:57 -- nvmf/common.sh@297 -- # local -ga x722 00:16:34.809 21:31:57 -- nvmf/common.sh@298 -- # mlx=() 00:16:34.809 21:31:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:34.809 21:31:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.809 21:31:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.809 21:31:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.809 21:31:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.809 21:31:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.809 21:31:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.809 21:31:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.809 21:31:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.809 21:31:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.809 21:31:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.809 21:31:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.809 21:31:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:34.809 21:31:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:34.809 21:31:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:34.809 21:31:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.809 21:31:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:34.809 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:34.809 21:31:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.809 21:31:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:34.809 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:34.809 21:31:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:34.809 21:31:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.809 21:31:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.809 21:31:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:34.809 21:31:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
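For orientation, the namespace wiring that nvmf_tcp_init performs below condenses to a handful of ip/iptables commands; a sketch of the exact sequence from this run (interface names cvl_0_* as discovered above, 4420 being the NVMe/TCP listener port):

# The target-side port moves into its own netns; the peer port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # cross-link check, as in the ping output that follows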
00:16:34.809 21:31:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:34.809 Found net devices under 0000:af:00.0: cvl_0_0 00:16:34.809 21:31:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.809 21:31:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.809 21:31:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.809 21:31:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:34.809 21:31:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.809 21:31:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:34.809 Found net devices under 0000:af:00.1: cvl_0_1 00:16:34.809 21:31:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.809 21:31:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:34.809 21:31:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:34.809 21:31:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:34.809 21:31:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:34.809 21:31:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.809 21:31:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.809 21:31:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.809 21:31:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:34.809 21:31:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.809 21:31:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.810 21:31:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:34.810 21:31:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.810 21:31:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.810 21:31:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:34.810 21:31:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:34.810 21:31:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.810 21:31:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.810 21:31:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.810 21:31:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.810 21:31:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:34.810 21:31:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.810 21:31:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:35.069 21:31:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:35.069 21:31:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:35.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:16:35.069 00:16:35.069 --- 10.0.0.2 ping statistics --- 00:16:35.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.069 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:16:35.069 21:31:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:35.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:35.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:16:35.069 00:16:35.069 --- 10.0.0.1 ping statistics --- 00:16:35.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.069 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:16:35.069 21:31:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.069 21:31:57 -- nvmf/common.sh@411 -- # return 0 00:16:35.069 21:31:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:35.069 21:31:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.069 21:31:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:35.069 21:31:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:35.069 21:31:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.069 21:31:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:35.069 21:31:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:35.069 21:31:57 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:35.069 21:31:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:35.069 21:31:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:35.069 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:16:35.069 21:31:57 -- nvmf/common.sh@470 -- # nvmfpid=2853868 00:16:35.069 21:31:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:35.069 21:31:57 -- nvmf/common.sh@471 -- # waitforlisten 2853868 00:16:35.069 21:31:57 -- common/autotest_common.sh@817 -- # '[' -z 2853868 ']' 00:16:35.069 21:31:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.069 21:31:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:35.069 21:31:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.069 21:31:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:35.069 21:31:57 -- common/autotest_common.sh@10 -- # set +x 00:16:35.069 [2024-04-24 21:31:57.839512] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:16:35.070 [2024-04-24 21:31:57.839560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.070 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.070 [2024-04-24 21:31:57.913171] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.357 [2024-04-24 21:31:57.982975] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.357 [2024-04-24 21:31:57.983015] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.357 [2024-04-24 21:31:57.983024] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.357 [2024-04-24 21:31:57.983032] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.357 [2024-04-24 21:31:57.983039] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
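The nvmf_tcp_init trace above boils down to a single-host loopback test bed: cvl_0_0 (the target port) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 (the initiator port) keeps 10.0.0.1 in the root namespace, TCP port 4420 is opened, and the cross-namespace pings confirm the link before the target starts. A minimal standalone sketch of the same sequence, assuming the cvl_0_* interface names and 10.0.0.0/24 addressing from this run:

    # sketch of the netns-based TCP test bed set up by nvmf/common.sh above
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                      # initiator -> target reachability check

With this in place, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), so the initiator-side kernel nvme driver and the SPDK target see each other over a real E810 link rather than localhost.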
00:16:35.357 [2024-04-24 21:31:57.983090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.357 [2024-04-24 21:31:57.983188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.357 [2024-04-24 21:31:57.983272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.357 [2024-04-24 21:31:57.983273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.922 21:31:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:35.922 21:31:58 -- common/autotest_common.sh@850 -- # return 0 00:16:35.922 21:31:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:35.922 21:31:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:35.922 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:16:35.922 21:31:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.922 21:31:58 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:36.180 [2024-04-24 21:31:58.827645] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.180 21:31:58 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:36.180 21:31:59 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:36.439 21:31:59 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:36.439 21:31:59 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:36.439 21:31:59 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:36.697 21:31:59 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:36.697 21:31:59 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:36.954 21:31:59 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:36.954 21:31:59 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:37.213 21:31:59 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:37.213 21:32:00 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:37.213 21:32:00 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:37.471 21:32:00 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:37.471 21:32:00 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:37.729 21:32:00 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:37.729 21:32:00 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:37.729 21:32:00 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:37.987 21:32:00 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:37.987 21:32:00 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:38.245 21:32:00 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:38.245 21:32:00 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:38.503 21:32:01 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.503 [2024-04-24 21:32:01.292842] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.503 21:32:01 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:38.761 21:32:01 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:39.019 21:32:01 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.401 21:32:02 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:40.401 21:32:02 -- common/autotest_common.sh@1184 -- # local i=0 00:16:40.401 21:32:02 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.401 21:32:02 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:16:40.401 21:32:02 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:16:40.401 21:32:02 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:42.349 21:32:04 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:42.349 21:32:04 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:42.349 21:32:04 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:42.349 21:32:05 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:16:42.349 21:32:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:42.349 21:32:05 -- common/autotest_common.sh@1194 -- # return 0 00:16:42.349 21:32:05 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:42.349 [global] 00:16:42.349 thread=1 00:16:42.349 invalidate=1 00:16:42.349 rw=write 00:16:42.349 time_based=1 00:16:42.349 runtime=1 00:16:42.349 ioengine=libaio 00:16:42.349 direct=1 00:16:42.349 bs=4096 00:16:42.349 iodepth=1 00:16:42.349 norandommap=0 00:16:42.349 numjobs=1 00:16:42.349 00:16:42.349 verify_dump=1 00:16:42.349 verify_backlog=512 00:16:42.349 verify_state_save=0 00:16:42.349 do_verify=1 00:16:42.349 verify=crc32c-intel 00:16:42.349 [job0] 00:16:42.349 filename=/dev/nvme0n1 00:16:42.349 [job1] 00:16:42.349 filename=/dev/nvme0n2 00:16:42.349 [job2] 00:16:42.349 filename=/dev/nvme0n3 00:16:42.349 [job3] 00:16:42.349 filename=/dev/nvme0n4 00:16:42.349 Could not set queue depth (nvme0n1) 00:16:42.349 Could not set queue depth (nvme0n2) 00:16:42.349 Could not set queue depth (nvme0n3) 00:16:42.349 Could not set queue depth (nvme0n4) 00:16:42.608 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.608 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.608 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.608 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.608 fio-3.35 
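The four /dev/nvme0n* devices fio targets here were assembled by target/fio.sh over a handful of rpc.py calls, all visible in the trace above. Condensed into a sketch (same arguments as in the log; the loops are added for brevity, whereas the script captures each bdev name from the RPC output):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in 0 1 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done    # Malloc0..Malloc6
    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for ns in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $ns      # -> nvme0n1..nvme0n4
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # host side (--hostnqn/--hostid from the log are machine-specific and omitted here)
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

waitforserial then polls lsblk for the SPDKISFASTANDAWESOME serial until all four namespaces appear, which is why the fio job file can hard-code /dev/nvme0n1 through /dev/nvme0n4.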
00:16:42.608 Starting 4 threads 00:16:43.984 00:16:43.984 job0: (groupid=0, jobs=1): err= 0: pid=2855330: Wed Apr 24 21:32:06 2024 00:16:43.984 read: IOPS=514, BW=2059KiB/s (2109kB/s)(2088KiB/1014msec) 00:16:43.984 slat (nsec): min=9075, max=46709, avg=10203.89, stdev=2922.29 00:16:43.984 clat (usec): min=310, max=42945, avg=1312.72, stdev=5663.36 00:16:43.984 lat (usec): min=320, max=42970, avg=1322.93, stdev=5665.18 00:16:43.984 clat percentiles (usec): 00:16:43.984 | 1.00th=[ 330], 5.00th=[ 371], 10.00th=[ 420], 20.00th=[ 506], 00:16:43.984 | 30.00th=[ 515], 40.00th=[ 519], 50.00th=[ 529], 60.00th=[ 529], 00:16:43.984 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 603], 95.00th=[ 676], 00:16:43.984 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:16:43.984 | 99.99th=[42730] 00:16:43.984 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:16:43.984 slat (nsec): min=12314, max=40061, avg=13968.84, stdev=2320.91 00:16:43.984 clat (usec): min=204, max=925, avg=296.34, stdev=98.57 00:16:43.984 lat (usec): min=217, max=939, avg=310.31, stdev=98.96 00:16:43.984 clat percentiles (usec): 00:16:43.984 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 231], 00:16:43.984 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 285], 00:16:43.984 | 70.00th=[ 293], 80.00th=[ 338], 90.00th=[ 465], 95.00th=[ 486], 00:16:43.984 | 99.00th=[ 717], 99.50th=[ 717], 99.90th=[ 750], 99.95th=[ 922], 00:16:43.984 | 99.99th=[ 922] 00:16:43.984 bw ( KiB/s): min= 2440, max= 5752, per=23.98%, avg=4096.00, stdev=2341.94, samples=2 00:16:43.984 iops : min= 610, max= 1438, avg=1024.00, stdev=585.48, samples=2 00:16:43.984 lat (usec) : 250=31.24%, 500=36.74%, 750=31.24%, 1000=0.13% 00:16:43.984 lat (msec) : 50=0.65% 00:16:43.984 cpu : usr=2.27%, sys=2.07%, ctx=1548, majf=0, minf=1 00:16:43.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.984 issued rwts: total=522,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.984 job1: (groupid=0, jobs=1): err= 0: pid=2855346: Wed Apr 24 21:32:06 2024 00:16:43.984 read: IOPS=20, BW=81.8KiB/s (83.8kB/s)(84.0KiB/1027msec) 00:16:43.984 slat (nsec): min=10170, max=24531, avg=17784.67, stdev=6372.04 00:16:43.984 clat (usec): min=40977, max=42135, avg=41720.89, stdev=431.47 00:16:43.984 lat (usec): min=40987, max=42158, avg=41738.68, stdev=435.40 00:16:43.984 clat percentiles (usec): 00:16:43.984 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:43.984 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:43.984 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:43.984 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:43.984 | 99.99th=[42206] 00:16:43.984 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:16:43.984 slat (nsec): min=6204, max=31369, avg=13565.43, stdev=2338.68 00:16:43.984 clat (usec): min=223, max=644, avg=276.20, stdev=75.01 00:16:43.984 lat (usec): min=236, max=664, avg=289.76, stdev=73.78 00:16:43.984 clat percentiles (usec): 00:16:43.984 | 1.00th=[ 225], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 235], 00:16:43.984 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 255], 00:16:43.984 | 70.00th=[ 262], 
80.00th=[ 297], 90.00th=[ 359], 95.00th=[ 441], 00:16:43.984 | 99.00th=[ 537], 99.50th=[ 537], 99.90th=[ 644], 99.95th=[ 644], 00:16:43.984 | 99.99th=[ 644] 00:16:43.984 bw ( KiB/s): min= 4096, max= 4096, per=23.98%, avg=4096.00, stdev= 0.00, samples=1 00:16:43.984 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:43.984 lat (usec) : 250=52.72%, 500=39.02%, 750=4.32% 00:16:43.984 lat (msec) : 50=3.94% 00:16:43.984 cpu : usr=1.17%, sys=0.29%, ctx=533, majf=0, minf=1 00:16:43.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.984 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.984 job2: (groupid=0, jobs=1): err= 0: pid=2855367: Wed Apr 24 21:32:06 2024 00:16:43.984 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:43.984 slat (nsec): min=9141, max=44833, avg=9851.15, stdev=1328.81 00:16:43.984 clat (usec): min=398, max=1011, avg=598.17, stdev=64.58 00:16:43.984 lat (usec): min=408, max=1021, avg=608.02, stdev=64.56 00:16:43.984 clat percentiles (usec): 00:16:43.984 | 1.00th=[ 433], 5.00th=[ 498], 10.00th=[ 529], 20.00th=[ 553], 00:16:43.984 | 30.00th=[ 562], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 619], 00:16:43.984 | 70.00th=[ 627], 80.00th=[ 652], 90.00th=[ 668], 95.00th=[ 685], 00:16:43.984 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 996], 99.95th=[ 1012], 00:16:43.984 | 99.99th=[ 1012] 00:16:43.984 write: IOPS=1312, BW=5251KiB/s (5377kB/s)(5256KiB/1001msec); 0 zone resets 00:16:43.984 slat (nsec): min=12651, max=48225, avg=13904.14, stdev=2215.24 00:16:43.984 clat (usec): min=207, max=656, avg=268.04, stdev=61.19 00:16:43.984 lat (usec): min=221, max=703, avg=281.95, stdev=61.50 00:16:43.984 clat percentiles (usec): 00:16:43.984 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 227], 00:16:43.984 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 251], 60.00th=[ 265], 00:16:43.984 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 326], 95.00th=[ 367], 00:16:43.984 | 99.00th=[ 562], 99.50th=[ 562], 99.90th=[ 586], 99.95th=[ 660], 00:16:43.984 | 99.99th=[ 660] 00:16:43.984 bw ( KiB/s): min= 4416, max= 4416, per=25.85%, avg=4416.00, stdev= 0.00, samples=1 00:16:43.984 iops : min= 1104, max= 1104, avg=1104.00, stdev= 0.00, samples=1 00:16:43.984 lat (usec) : 250=27.54%, 500=30.20%, 750=41.87%, 1000=0.34% 00:16:43.984 lat (msec) : 2=0.04% 00:16:43.984 cpu : usr=2.20%, sys=4.20%, ctx=2340, majf=0, minf=2 00:16:43.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.985 issued rwts: total=1024,1314,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.985 job3: (groupid=0, jobs=1): err= 0: pid=2855375: Wed Apr 24 21:32:06 2024 00:16:43.985 read: IOPS=1094, BW=4380KiB/s (4485kB/s)(4384KiB/1001msec) 00:16:43.985 slat (nsec): min=8772, max=38162, avg=10338.35, stdev=3484.72 00:16:43.985 clat (usec): min=309, max=786, avg=510.40, stdev=55.80 00:16:43.985 lat (usec): min=319, max=811, avg=520.74, stdev=57.05 00:16:43.985 clat percentiles (usec): 00:16:43.985 | 1.00th=[ 359], 5.00th=[ 408], 10.00th=[ 453], 
20.00th=[ 482], 00:16:43.985 | 30.00th=[ 494], 40.00th=[ 506], 50.00th=[ 515], 60.00th=[ 519], 00:16:43.985 | 70.00th=[ 529], 80.00th=[ 537], 90.00th=[ 562], 95.00th=[ 594], 00:16:43.985 | 99.00th=[ 668], 99.50th=[ 734], 99.90th=[ 758], 99.95th=[ 791], 00:16:43.985 | 99.99th=[ 791] 00:16:43.985 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:43.985 slat (nsec): min=11664, max=43358, avg=12581.21, stdev=1680.10 00:16:43.985 clat (usec): min=195, max=836, avg=263.19, stdev=63.17 00:16:43.985 lat (usec): min=207, max=875, avg=275.77, stdev=63.51 00:16:43.985 clat percentiles (usec): 00:16:43.985 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 219], 00:16:43.985 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 258], 00:16:43.985 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 330], 95.00th=[ 392], 00:16:43.985 | 99.00th=[ 553], 99.50th=[ 553], 99.90th=[ 562], 99.95th=[ 840], 00:16:43.985 | 99.99th=[ 840] 00:16:43.985 bw ( KiB/s): min= 6232, max= 6232, per=36.48%, avg=6232.00, stdev= 0.00, samples=1 00:16:43.985 iops : min= 1558, max= 1558, avg=1558.00, stdev= 0.00, samples=1 00:16:43.985 lat (usec) : 250=31.65%, 500=40.43%, 750=27.74%, 1000=0.19% 00:16:43.985 cpu : usr=2.00%, sys=2.90%, ctx=2632, majf=0, minf=1 00:16:43.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.985 issued rwts: total=1096,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.985 00:16:43.985 Run status group 0 (all jobs): 00:16:43.985 READ: bw=10.1MiB/s (10.6MB/s), 81.8KiB/s-4380KiB/s (83.8kB/s-4485kB/s), io=10.4MiB (10.9MB), run=1001-1027msec 00:16:43.985 WRITE: bw=16.7MiB/s (17.5MB/s), 1994KiB/s-6138KiB/s (2042kB/s-6285kB/s), io=17.1MiB (18.0MB), run=1001-1027msec 00:16:43.985 00:16:43.985 Disk stats (read/write): 00:16:43.985 nvme0n1: ios=572/1024, merge=0/0, ticks=1166/288, in_queue=1454, util=85.77% 00:16:43.985 nvme0n2: ios=66/512, merge=0/0, ticks=797/135, in_queue=932, util=90.27% 00:16:43.985 nvme0n3: ios=920/1024, merge=0/0, ticks=778/274, in_queue=1052, util=90.71% 00:16:43.985 nvme0n4: ios=1054/1024, merge=0/0, ticks=604/267, in_queue=871, util=96.31% 00:16:43.985 21:32:06 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:43.985 [global] 00:16:43.985 thread=1 00:16:43.985 invalidate=1 00:16:43.985 rw=randwrite 00:16:43.985 time_based=1 00:16:43.985 runtime=1 00:16:43.985 ioengine=libaio 00:16:43.985 direct=1 00:16:43.985 bs=4096 00:16:43.985 iodepth=1 00:16:43.985 norandommap=0 00:16:43.985 numjobs=1 00:16:43.985 00:16:43.985 verify_dump=1 00:16:43.985 verify_backlog=512 00:16:43.985 verify_state_save=0 00:16:43.985 do_verify=1 00:16:43.985 verify=crc32c-intel 00:16:43.985 [job0] 00:16:43.985 filename=/dev/nvme0n1 00:16:43.985 [job1] 00:16:43.985 filename=/dev/nvme0n2 00:16:43.985 [job2] 00:16:43.985 filename=/dev/nvme0n3 00:16:43.985 [job3] 00:16:43.985 filename=/dev/nvme0n4 00:16:43.985 Could not set queue depth (nvme0n1) 00:16:43.985 Could not set queue depth (nvme0n2) 00:16:43.985 Could not set queue depth (nvme0n3) 00:16:43.985 Could not set queue depth (nvme0n4) 00:16:44.242 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:16:44.242 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.242 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.242 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.242 fio-3.35 00:16:44.242 Starting 4 threads 00:16:45.618 00:16:45.618 job0: (groupid=0, jobs=1): err= 0: pid=2855767: Wed Apr 24 21:32:08 2024 00:16:45.618 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:45.618 slat (nsec): min=8866, max=28634, avg=9543.08, stdev=1031.20 00:16:45.618 clat (usec): min=390, max=690, avg=563.71, stdev=45.29 00:16:45.618 lat (usec): min=399, max=699, avg=573.25, stdev=45.20 00:16:45.618 clat percentiles (usec): 00:16:45.618 | 1.00th=[ 416], 5.00th=[ 445], 10.00th=[ 498], 20.00th=[ 553], 00:16:45.618 | 30.00th=[ 562], 40.00th=[ 570], 50.00th=[ 578], 60.00th=[ 578], 00:16:45.618 | 70.00th=[ 586], 80.00th=[ 594], 90.00th=[ 594], 95.00th=[ 603], 00:16:45.618 | 99.00th=[ 635], 99.50th=[ 652], 99.90th=[ 676], 99.95th=[ 693], 00:16:45.618 | 99.99th=[ 693] 00:16:45.619 write: IOPS=1459, BW=5838KiB/s (5978kB/s)(5844KiB/1001msec); 0 zone resets 00:16:45.619 slat (nsec): min=12058, max=46510, avg=13287.33, stdev=2281.74 00:16:45.619 clat (usec): min=196, max=798, avg=263.15, stdev=80.26 00:16:45.619 lat (usec): min=209, max=828, avg=276.44, stdev=81.10 00:16:45.619 clat percentiles (usec): 00:16:45.619 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 217], 00:16:45.619 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 249], 00:16:45.619 | 70.00th=[ 269], 80.00th=[ 293], 90.00th=[ 322], 95.00th=[ 420], 00:16:45.619 | 99.00th=[ 668], 99.50th=[ 742], 99.90th=[ 766], 99.95th=[ 799], 00:16:45.619 | 99.99th=[ 799] 00:16:45.619 bw ( KiB/s): min= 4992, max= 4992, per=28.27%, avg=4992.00, stdev= 0.00, samples=1 00:16:45.619 iops : min= 1248, max= 1248, avg=1248.00, stdev= 0.00, samples=1 00:16:45.619 lat (usec) : 250=35.86%, 500=25.23%, 750=38.75%, 1000=0.16% 00:16:45.619 cpu : usr=2.40%, sys=4.40%, ctx=2488, majf=0, minf=1 00:16:45.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.619 issued rwts: total=1024,1461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.619 job1: (groupid=0, jobs=1): err= 0: pid=2855787: Wed Apr 24 21:32:08 2024 00:16:45.619 read: IOPS=25, BW=101KiB/s (104kB/s)(104KiB/1027msec) 00:16:45.619 slat (nsec): min=9253, max=25365, avg=20697.50, stdev=6131.27 00:16:45.619 clat (usec): min=473, max=42032, avg=33814.91, stdev=16551.02 00:16:45.619 lat (usec): min=483, max=42057, avg=33835.60, stdev=16556.20 00:16:45.619 clat percentiles (usec): 00:16:45.619 | 1.00th=[ 474], 5.00th=[ 478], 10.00th=[ 611], 20.00th=[39060], 00:16:45.619 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:16:45.619 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:45.619 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:45.619 | 99.99th=[42206] 00:16:45.619 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:16:45.619 slat (nsec): min=12198, max=38905, avg=13319.56, stdev=1841.27 00:16:45.619 clat (usec): min=202, 
max=717, avg=270.74, stdev=85.83 00:16:45.619 lat (usec): min=214, max=755, avg=284.06, stdev=86.37 00:16:45.619 clat percentiles (usec): 00:16:45.619 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 215], 00:16:45.619 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 245], 00:16:45.619 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 371], 95.00th=[ 478], 00:16:45.619 | 99.00th=[ 562], 99.50th=[ 570], 99.90th=[ 717], 99.95th=[ 717], 00:16:45.619 | 99.99th=[ 717] 00:16:45.619 bw ( KiB/s): min= 4096, max= 4096, per=23.20%, avg=4096.00, stdev= 0.00, samples=1 00:16:45.619 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:45.619 lat (usec) : 250=60.04%, 500=31.97%, 750=4.09% 00:16:45.619 lat (msec) : 50=3.90% 00:16:45.619 cpu : usr=0.78%, sys=0.78%, ctx=540, majf=0, minf=2 00:16:45.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.619 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.619 job2: (groupid=0, jobs=1): err= 0: pid=2855808: Wed Apr 24 21:32:08 2024 00:16:45.619 read: IOPS=515, BW=2063KiB/s (2113kB/s)(2092KiB/1014msec) 00:16:45.619 slat (nsec): min=9082, max=26571, avg=9955.72, stdev=2060.63 00:16:45.619 clat (usec): min=350, max=42927, avg=1378.81, stdev=5932.67 00:16:45.619 lat (usec): min=360, max=42951, avg=1388.76, stdev=5934.41 00:16:45.619 clat percentiles (usec): 00:16:45.619 | 1.00th=[ 379], 5.00th=[ 416], 10.00th=[ 490], 20.00th=[ 506], 00:16:45.619 | 30.00th=[ 510], 40.00th=[ 515], 50.00th=[ 519], 60.00th=[ 519], 00:16:45.619 | 70.00th=[ 523], 80.00th=[ 529], 90.00th=[ 537], 95.00th=[ 545], 00:16:45.619 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:16:45.619 | 99.99th=[42730] 00:16:45.619 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(4096KiB/1014msec); 0 zone resets 00:16:45.619 slat (nsec): min=12222, max=45242, avg=13393.43, stdev=1972.32 00:16:45.619 clat (usec): min=202, max=2210, avg=261.47, stdev=80.01 00:16:45.619 lat (usec): min=215, max=2224, avg=274.86, stdev=80.30 00:16:45.619 clat percentiles (usec): 00:16:45.619 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 227], 00:16:45.619 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 253], 00:16:45.619 | 70.00th=[ 262], 80.00th=[ 281], 90.00th=[ 330], 95.00th=[ 351], 00:16:45.619 | 99.00th=[ 465], 99.50th=[ 486], 99.90th=[ 693], 99.95th=[ 2212], 00:16:45.619 | 99.99th=[ 2212] 00:16:45.619 bw ( KiB/s): min= 1672, max= 6520, per=23.20%, avg=4096.00, stdev=3428.05, samples=2 00:16:45.619 iops : min= 418, max= 1630, avg=1024.00, stdev=857.01, samples=2 00:16:45.619 lat (usec) : 250=37.94%, 500=32.77%, 750=28.51% 00:16:45.619 lat (msec) : 4=0.06%, 50=0.71% 00:16:45.619 cpu : usr=0.69%, sys=2.27%, ctx=1548, majf=0, minf=1 00:16:45.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.619 issued rwts: total=523,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.619 job3: (groupid=0, jobs=1): err= 0: pid=2855815: Wed Apr 24 21:32:08 2024 00:16:45.619 read: IOPS=1066, BW=4268KiB/s 
(4370kB/s)(4272KiB/1001msec) 00:16:45.619 slat (nsec): min=8890, max=34299, avg=9596.92, stdev=1038.47 00:16:45.619 clat (usec): min=388, max=616, avg=542.34, stdev=41.17 00:16:45.619 lat (usec): min=399, max=625, avg=551.94, stdev=41.12 00:16:45.619 clat percentiles (usec): 00:16:45.619 | 1.00th=[ 408], 5.00th=[ 445], 10.00th=[ 478], 20.00th=[ 529], 00:16:45.619 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 562], 00:16:45.619 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 578], 95.00th=[ 586], 00:16:45.619 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 611], 99.95th=[ 619], 00:16:45.619 | 99.99th=[ 619] 00:16:45.619 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:45.619 slat (nsec): min=11784, max=46458, avg=12915.13, stdev=2278.16 00:16:45.619 clat (usec): min=190, max=680, avg=249.20, stdev=66.75 00:16:45.619 lat (usec): min=202, max=726, avg=262.12, stdev=67.76 00:16:45.619 clat percentiles (usec): 00:16:45.619 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 208], 00:16:45.619 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 243], 00:16:45.619 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 363], 00:16:45.619 | 99.00th=[ 570], 99.50th=[ 644], 99.90th=[ 668], 99.95th=[ 685], 00:16:45.619 | 99.99th=[ 685] 00:16:45.619 bw ( KiB/s): min= 5680, max= 5680, per=32.17%, avg=5680.00, stdev= 0.00, samples=1 00:16:45.619 iops : min= 1420, max= 1420, avg=1420.00, stdev= 0.00, samples=1 00:16:45.619 lat (usec) : 250=38.21%, 500=25.42%, 750=36.37% 00:16:45.619 cpu : usr=2.20%, sys=2.60%, ctx=2605, majf=0, minf=1 00:16:45.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.619 issued rwts: total=1068,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.619 00:16:45.619 Run status group 0 (all jobs): 00:16:45.619 READ: bw=10.0MiB/s (10.5MB/s), 101KiB/s-4268KiB/s (104kB/s-4370kB/s), io=10.3MiB (10.8MB), run=1001-1027msec 00:16:45.619 WRITE: bw=17.2MiB/s (18.1MB/s), 1994KiB/s-6138KiB/s (2042kB/s-6285kB/s), io=17.7MiB (18.6MB), run=1001-1027msec 00:16:45.619 00:16:45.619 Disk stats (read/write): 00:16:45.619 nvme0n1: ios=935/1024, merge=0/0, ticks=1354/275, in_queue=1629, util=87.17% 00:16:45.619 nvme0n2: ios=43/512, merge=0/0, ticks=1554/134, in_queue=1688, util=87.10% 00:16:45.619 nvme0n3: ios=541/1024, merge=0/0, ticks=1373/266, in_queue=1639, util=95.42% 00:16:45.619 nvme0n4: ios=1027/1024, merge=0/0, ticks=1132/257, in_queue=1389, util=96.76% 00:16:45.619 21:32:08 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:45.619 [global] 00:16:45.619 thread=1 00:16:45.619 invalidate=1 00:16:45.619 rw=write 00:16:45.619 time_based=1 00:16:45.619 runtime=1 00:16:45.619 ioengine=libaio 00:16:45.619 direct=1 00:16:45.619 bs=4096 00:16:45.619 iodepth=128 00:16:45.619 norandommap=0 00:16:45.619 numjobs=1 00:16:45.619 00:16:45.619 verify_dump=1 00:16:45.619 verify_backlog=512 00:16:45.619 verify_state_save=0 00:16:45.620 do_verify=1 00:16:45.620 verify=crc32c-intel 00:16:45.620 [job0] 00:16:45.620 filename=/dev/nvme0n1 00:16:45.620 [job1] 00:16:45.620 filename=/dev/nvme0n2 00:16:45.620 [job2] 00:16:45.620 filename=/dev/nvme0n3 00:16:45.620 [job3] 00:16:45.620 filename=/dev/nvme0n4 
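Each fio-wrapper invocation in this test differs only in -t (I/O pattern), -d (iodepth), and -r (runtime); the job file it feeds to fio, reassembled from the dump above into its on-disk layout, looks like the following sketch. The verify block makes every run a data-integrity pass, not just a throughput measurement:

    [global]
    thread=1
    invalidate=1
    rw=write              ; -t write; randwrite/read in the other runs
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096               ; -i 4096
    iodepth=128           ; -d 128; the earlier runs used -d 1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel   ; re-read written blocks and check CRC32C

    ; one job per exported namespace
    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4

The "Could not set queue depth" warnings that follow are expected: with libaio against the kernel nvme block devices, fio cannot adjust the device queue depth via sysfs inside this environment, and the message is informational rather than a failure.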
00:16:45.620 Could not set queue depth (nvme0n1) 00:16:45.620 Could not set queue depth (nvme0n2) 00:16:45.620 Could not set queue depth (nvme0n3) 00:16:45.620 Could not set queue depth (nvme0n4) 00:16:45.878 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:45.878 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:45.878 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:45.878 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:45.878 fio-3.35 00:16:45.878 Starting 4 threads 00:16:47.255 00:16:47.255 job0: (groupid=0, jobs=1): err= 0: pid=2856214: Wed Apr 24 21:32:09 2024 00:16:47.255 read: IOPS=4193, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1007msec) 00:16:47.255 slat (usec): min=2, max=18404, avg=117.74, stdev=807.28 00:16:47.255 clat (usec): min=4321, max=79620, avg=14589.68, stdev=9772.27 00:16:47.255 lat (usec): min=6074, max=79634, avg=14707.42, stdev=9840.27 00:16:47.255 clat percentiles (usec): 00:16:47.255 | 1.00th=[ 6521], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9372], 00:16:47.255 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[11600], 60.00th=[12911], 00:16:47.255 | 70.00th=[14353], 80.00th=[16909], 90.00th=[22152], 95.00th=[33817], 00:16:47.255 | 99.00th=[71828], 99.50th=[76022], 99.90th=[79168], 99.95th=[79168], 00:16:47.255 | 99.99th=[79168] 00:16:47.255 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:16:47.255 slat (usec): min=3, max=10345, avg=101.76, stdev=543.90 00:16:47.255 clat (usec): min=1909, max=79577, avg=14343.67, stdev=8783.37 00:16:47.255 lat (usec): min=1925, max=79583, avg=14445.42, stdev=8826.19 00:16:47.255 clat percentiles (usec): 00:16:47.255 | 1.00th=[ 5211], 5.00th=[ 6587], 10.00th=[ 6915], 20.00th=[ 8586], 00:16:47.255 | 30.00th=[ 9634], 40.00th=[10945], 50.00th=[12125], 60.00th=[13173], 00:16:47.255 | 70.00th=[15533], 80.00th=[18744], 90.00th=[21627], 95.00th=[27919], 00:16:47.255 | 99.00th=[52691], 99.50th=[60031], 99.90th=[66323], 99.95th=[66323], 00:16:47.255 | 99.99th=[79168] 00:16:47.255 bw ( KiB/s): min=14888, max=21968, per=29.97%, avg=18428.00, stdev=5006.32, samples=2 00:16:47.255 iops : min= 3722, max= 5492, avg=4607.00, stdev=1251.58, samples=2 00:16:47.255 lat (msec) : 2=0.03%, 4=0.11%, 10=31.12%, 20=55.07%, 50=12.17% 00:16:47.255 lat (msec) : 100=1.49% 00:16:47.255 cpu : usr=4.97%, sys=5.86%, ctx=432, majf=0, minf=1 00:16:47.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:47.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.256 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.256 job1: (groupid=0, jobs=1): err= 0: pid=2856231: Wed Apr 24 21:32:09 2024 00:16:47.256 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:16:47.256 slat (usec): min=2, max=9707, avg=84.32, stdev=552.87 00:16:47.256 clat (usec): min=4824, max=27788, avg=11472.79, stdev=3195.23 00:16:47.256 lat (usec): min=4830, max=27805, avg=11557.12, stdev=3223.82 00:16:47.256 clat percentiles (usec): 00:16:47.256 | 1.00th=[ 5604], 5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[ 9110], 00:16:47.256 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10814], 
60.00th=[11600], 00:16:47.256 | 70.00th=[12387], 80.00th=[13698], 90.00th=[15401], 95.00th=[17171], 00:16:47.256 | 99.00th=[21627], 99.50th=[24511], 99.90th=[27657], 99.95th=[27657], 00:16:47.256 | 99.99th=[27919] 00:16:47.256 write: IOPS=5348, BW=20.9MiB/s (21.9MB/s)(21.1MiB/1010msec); 0 zone resets 00:16:47.256 slat (usec): min=3, max=19841, avg=98.29, stdev=631.19 00:16:47.256 clat (usec): min=4197, max=42239, avg=12835.62, stdev=5489.28 00:16:47.256 lat (usec): min=4309, max=42251, avg=12933.90, stdev=5518.11 00:16:47.256 clat percentiles (usec): 00:16:47.256 | 1.00th=[ 5866], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8455], 00:16:47.256 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[11469], 60.00th=[12649], 00:16:47.256 | 70.00th=[14353], 80.00th=[16909], 90.00th=[18744], 95.00th=[21103], 00:16:47.256 | 99.00th=[34341], 99.50th=[34866], 99.90th=[37487], 99.95th=[37487], 00:16:47.256 | 99.99th=[42206] 00:16:47.256 bw ( KiB/s): min=20480, max=21720, per=34.32%, avg=21100.00, stdev=876.81, samples=2 00:16:47.256 iops : min= 5120, max= 5430, avg=5275.00, stdev=219.20, samples=2 00:16:47.256 lat (msec) : 10=37.19%, 20=57.79%, 50=5.02% 00:16:47.256 cpu : usr=4.06%, sys=6.84%, ctx=537, majf=0, minf=1 00:16:47.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:47.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.256 issued rwts: total=5120,5402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.256 job2: (groupid=0, jobs=1): err= 0: pid=2856251: Wed Apr 24 21:32:09 2024 00:16:47.256 read: IOPS=2071, BW=8285KiB/s (8484kB/s)(8360KiB/1009msec) 00:16:47.256 slat (nsec): min=1749, max=19375k, avg=198600.10, stdev=1361618.95 00:16:47.256 clat (usec): min=2178, max=57346, avg=25358.78, stdev=10138.85 00:16:47.256 lat (usec): min=7908, max=57382, avg=25557.38, stdev=10241.23 00:16:47.256 clat percentiles (usec): 00:16:47.256 | 1.00th=[ 8979], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[17433], 00:16:47.256 | 30.00th=[19530], 40.00th=[22152], 50.00th=[23462], 60.00th=[25297], 00:16:47.256 | 70.00th=[29230], 80.00th=[33424], 90.00th=[41157], 95.00th=[44827], 00:16:47.256 | 99.00th=[47449], 99.50th=[47449], 99.90th=[52167], 99.95th=[56361], 00:16:47.256 | 99.99th=[57410] 00:16:47.256 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:16:47.256 slat (usec): min=2, max=27491, avg=224.33, stdev=1288.58 00:16:47.256 clat (usec): min=9890, max=67838, avg=29097.81, stdev=12499.50 00:16:47.256 lat (usec): min=9902, max=67869, avg=29322.15, stdev=12564.07 00:16:47.256 clat percentiles (usec): 00:16:47.256 | 1.00th=[ 9896], 5.00th=[13173], 10.00th=[16712], 20.00th=[18482], 00:16:47.256 | 30.00th=[20841], 40.00th=[23462], 50.00th=[25297], 60.00th=[27919], 00:16:47.256 | 70.00th=[34341], 80.00th=[40633], 90.00th=[48497], 95.00th=[55313], 00:16:47.256 | 99.00th=[59507], 99.50th=[60031], 99.90th=[60031], 99.95th=[66323], 00:16:47.256 | 99.99th=[67634] 00:16:47.256 bw ( KiB/s): min= 8504, max=11288, per=16.10%, avg=9896.00, stdev=1968.59, samples=2 00:16:47.256 iops : min= 2126, max= 2822, avg=2474.00, stdev=492.15, samples=2 00:16:47.256 lat (msec) : 4=0.02%, 10=5.03%, 20=22.99%, 50=66.71%, 100=5.25% 00:16:47.256 cpu : usr=1.39%, sys=3.87%, ctx=280, majf=0, minf=1 00:16:47.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:16:47.256 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.256 issued rwts: total=2090,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.256 job3: (groupid=0, jobs=1): err= 0: pid=2856259: Wed Apr 24 21:32:09 2024 00:16:47.256 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:16:47.256 slat (nsec): min=1818, max=19365k, avg=142195.61, stdev=1046233.22 00:16:47.256 clat (usec): min=7418, max=53664, avg=19157.55, stdev=8809.12 00:16:47.256 lat (usec): min=7425, max=56094, avg=19299.74, stdev=8896.96 00:16:47.256 clat percentiles (usec): 00:16:47.256 | 1.00th=[ 8455], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[11863], 00:16:47.256 | 30.00th=[14222], 40.00th=[16188], 50.00th=[16712], 60.00th=[18744], 00:16:47.256 | 70.00th=[21103], 80.00th=[23462], 90.00th=[32900], 95.00th=[36963], 00:16:47.256 | 99.00th=[46924], 99.50th=[51119], 99.90th=[53740], 99.95th=[53740], 00:16:47.256 | 99.99th=[53740] 00:16:47.256 write: IOPS=2930, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1008msec); 0 zone resets 00:16:47.256 slat (usec): min=2, max=14734, avg=210.14, stdev=983.63 00:16:47.256 clat (usec): min=1484, max=63656, avg=26635.84, stdev=15430.72 00:16:47.256 lat (usec): min=1498, max=63670, avg=26845.99, stdev=15528.60 00:16:47.256 clat percentiles (usec): 00:16:47.256 | 1.00th=[ 6128], 5.00th=[ 7177], 10.00th=[ 9110], 20.00th=[12911], 00:16:47.256 | 30.00th=[15270], 40.00th=[17957], 50.00th=[21365], 60.00th=[27657], 00:16:47.256 | 70.00th=[36439], 80.00th=[42730], 90.00th=[50594], 95.00th=[54789], 00:16:47.256 | 99.00th=[60556], 99.50th=[62129], 99.90th=[63701], 99.95th=[63701], 00:16:47.256 | 99.99th=[63701] 00:16:47.256 bw ( KiB/s): min= 9504, max=13104, per=18.39%, avg=11304.00, stdev=2545.58, samples=2 00:16:47.256 iops : min= 2376, max= 3276, avg=2826.00, stdev=636.40, samples=2 00:16:47.256 lat (msec) : 2=0.04%, 4=0.18%, 10=10.55%, 20=45.34%, 50=37.87% 00:16:47.256 lat (msec) : 100=6.02% 00:16:47.256 cpu : usr=1.49%, sys=4.17%, ctx=316, majf=0, minf=1 00:16:47.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:47.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.256 issued rwts: total=2560,2954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.256 00:16:47.256 Run status group 0 (all jobs): 00:16:47.256 READ: bw=54.1MiB/s (56.7MB/s), 8285KiB/s-19.8MiB/s (8484kB/s-20.8MB/s), io=54.7MiB (57.3MB), run=1007-1010msec 00:16:47.256 WRITE: bw=60.0MiB/s (63.0MB/s), 9.91MiB/s-20.9MiB/s (10.4MB/s-21.9MB/s), io=60.6MiB (63.6MB), run=1007-1010msec 00:16:47.256 00:16:47.256 Disk stats (read/write): 00:16:47.256 nvme0n1: ios=3122/3567, merge=0/0, ticks=46539/54689, in_queue=101228, util=86.57% 00:16:47.256 nvme0n2: ios=4146/4301, merge=0/0, ticks=46575/55483, in_queue=102058, util=94.98% 00:16:47.256 nvme0n3: ios=1969/2048, merge=0/0, ticks=19062/25652, in_queue=44714, util=94.67% 00:16:47.256 nvme0n4: ios=2401/2560, merge=0/0, ticks=29146/41324, in_queue=70470, util=96.97% 00:16:47.256 21:32:09 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:47.256 [global] 00:16:47.256 thread=1 00:16:47.256 invalidate=1 00:16:47.256 rw=randwrite 00:16:47.256 
time_based=1 00:16:47.256 runtime=1 00:16:47.256 ioengine=libaio 00:16:47.256 direct=1 00:16:47.256 bs=4096 00:16:47.256 iodepth=128 00:16:47.256 norandommap=0 00:16:47.256 numjobs=1 00:16:47.256 00:16:47.256 verify_dump=1 00:16:47.256 verify_backlog=512 00:16:47.256 verify_state_save=0 00:16:47.256 do_verify=1 00:16:47.256 verify=crc32c-intel 00:16:47.256 [job0] 00:16:47.256 filename=/dev/nvme0n1 00:16:47.256 [job1] 00:16:47.256 filename=/dev/nvme0n2 00:16:47.256 [job2] 00:16:47.256 filename=/dev/nvme0n3 00:16:47.256 [job3] 00:16:47.256 filename=/dev/nvme0n4 00:16:47.256 Could not set queue depth (nvme0n1) 00:16:47.256 Could not set queue depth (nvme0n2) 00:16:47.256 Could not set queue depth (nvme0n3) 00:16:47.256 Could not set queue depth (nvme0n4) 00:16:47.514 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:47.514 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:47.514 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:47.514 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:47.514 fio-3.35 00:16:47.514 Starting 4 threads 00:16:48.893 00:16:48.893 job0: (groupid=0, jobs=1): err= 0: pid=2856633: Wed Apr 24 21:32:11 2024 00:16:48.893 read: IOPS=3977, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1007msec) 00:16:48.893 slat (usec): min=2, max=14591, avg=118.66, stdev=840.68 00:16:48.893 clat (usec): min=4919, max=39931, avg=16239.19, stdev=5404.42 00:16:48.893 lat (usec): min=7189, max=39941, avg=16357.85, stdev=5439.62 00:16:48.893 clat percentiles (usec): 00:16:48.893 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[10683], 20.00th=[11338], 00:16:48.893 | 30.00th=[12387], 40.00th=[13829], 50.00th=[15401], 60.00th=[16909], 00:16:48.893 | 70.00th=[19268], 80.00th=[20841], 90.00th=[23200], 95.00th=[25822], 00:16:48.893 | 99.00th=[31851], 99.50th=[34866], 99.90th=[40109], 99.95th=[40109], 00:16:48.893 | 99.99th=[40109] 00:16:48.893 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:16:48.893 slat (usec): min=3, max=13461, avg=118.25, stdev=751.66 00:16:48.893 clat (usec): min=1557, max=33467, avg=15220.44, stdev=4686.88 00:16:48.893 lat (usec): min=1569, max=33482, avg=15338.69, stdev=4697.71 00:16:48.893 clat percentiles (usec): 00:16:48.893 | 1.00th=[ 6915], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10814], 00:16:48.893 | 30.00th=[11731], 40.00th=[13698], 50.00th=[14877], 60.00th=[16188], 00:16:48.893 | 70.00th=[17695], 80.00th=[19530], 90.00th=[21365], 95.00th=[23200], 00:16:48.893 | 99.00th=[28967], 99.50th=[28967], 99.90th=[33162], 99.95th=[33424], 00:16:48.893 | 99.99th=[33424] 00:16:48.893 bw ( KiB/s): min=16351, max=16384, per=28.89%, avg=16367.50, stdev=23.33, samples=2 00:16:48.893 iops : min= 4087, max= 4096, avg=4091.50, stdev= 6.36, samples=2 00:16:48.893 lat (msec) : 2=0.02%, 10=9.68%, 20=69.93%, 50=20.37% 00:16:48.893 cpu : usr=5.77%, sys=5.96%, ctx=325, majf=0, minf=1 00:16:48.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:48.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:48.893 issued rwts: total=4005,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.893 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:48.893 job1: (groupid=0, jobs=1): 
err= 0: pid=2856657: Wed Apr 24 21:32:11 2024 00:16:48.893 read: IOPS=3139, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1009msec) 00:16:48.893 slat (usec): min=2, max=20551, avg=137.93, stdev=1059.07 00:16:48.893 clat (usec): min=2148, max=43826, avg=17966.39, stdev=5262.47 00:16:48.893 lat (usec): min=8681, max=43836, avg=18104.32, stdev=5306.40 00:16:48.893 clat percentiles (usec): 00:16:48.893 | 1.00th=[ 9110], 5.00th=[11076], 10.00th=[11994], 20.00th=[13304], 00:16:48.893 | 30.00th=[14746], 40.00th=[15664], 50.00th=[17171], 60.00th=[19006], 00:16:48.893 | 70.00th=[20055], 80.00th=[22676], 90.00th=[25560], 95.00th=[27657], 00:16:48.893 | 99.00th=[35390], 99.50th=[35390], 99.90th=[43779], 99.95th=[43779], 00:16:48.893 | 99.99th=[43779] 00:16:48.893 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:16:48.893 slat (usec): min=3, max=12874, avg=145.66, stdev=873.98 00:16:48.893 clat (usec): min=1958, max=101200, avg=19755.82, stdev=18672.84 00:16:48.893 lat (usec): min=1974, max=101206, avg=19901.48, stdev=18768.89 00:16:48.893 clat percentiles (msec): 00:16:48.893 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 11], 20.00th=[ 12], 00:16:48.893 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:16:48.893 | 70.00th=[ 18], 80.00th=[ 20], 90.00th=[ 28], 95.00th=[ 79], 00:16:48.893 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 102], 99.95th=[ 102], 00:16:48.893 | 99.99th=[ 102] 00:16:48.893 bw ( KiB/s): min=11856, max=16351, per=24.90%, avg=14103.50, stdev=3178.44, samples=2 00:16:48.893 iops : min= 2964, max= 4087, avg=3525.50, stdev=794.08, samples=2 00:16:48.893 lat (msec) : 2=0.03%, 4=0.01%, 10=5.29%, 20=72.25%, 50=18.41% 00:16:48.893 lat (msec) : 100=3.92%, 250=0.09% 00:16:48.893 cpu : usr=3.67%, sys=5.56%, ctx=272, majf=0, minf=1 00:16:48.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:48.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:48.893 issued rwts: total=3168,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.893 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:48.893 job2: (groupid=0, jobs=1): err= 0: pid=2856679: Wed Apr 24 21:32:11 2024 00:16:48.893 read: IOPS=4530, BW=17.7MiB/s (18.6MB/s)(18.0MiB/1017msec) 00:16:48.893 slat (usec): min=2, max=7033, avg=90.15, stdev=515.76 00:16:48.893 clat (usec): min=2972, max=22616, avg=11087.87, stdev=3095.77 00:16:48.893 lat (usec): min=6748, max=22620, avg=11178.02, stdev=3131.17 00:16:48.893 clat percentiles (usec): 00:16:48.893 | 1.00th=[ 6980], 5.00th=[ 7373], 10.00th=[ 7701], 20.00th=[ 8717], 00:16:48.893 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[11076], 00:16:48.893 | 70.00th=[11863], 80.00th=[13173], 90.00th=[15795], 95.00th=[17695], 00:16:48.893 | 99.00th=[20579], 99.50th=[21365], 99.90th=[21890], 99.95th=[21890], 00:16:48.893 | 99.99th=[22676] 00:16:48.893 write: IOPS=4889, BW=19.1MiB/s (20.0MB/s)(19.4MiB/1017msec); 0 zone resets 00:16:48.893 slat (usec): min=3, max=6928, avg=111.72, stdev=413.58 00:16:48.893 clat (usec): min=1904, max=33768, avg=15661.90, stdev=4765.97 00:16:48.893 lat (usec): min=1920, max=33773, avg=15773.63, stdev=4787.86 00:16:48.893 clat percentiles (usec): 00:16:48.893 | 1.00th=[ 5669], 5.00th=[ 6390], 10.00th=[ 8029], 20.00th=[10552], 00:16:48.893 | 30.00th=[14615], 40.00th=[16909], 50.00th=[17433], 60.00th=[17957], 00:16:48.893 | 70.00th=[18220], 80.00th=[18482], 90.00th=[19530], 95.00th=[21627], 
00:16:48.893 | 99.00th=[26870], 99.50th=[30278], 99.90th=[33817], 99.95th=[33817], 00:16:48.893 | 99.99th=[33817] 00:16:48.893 bw ( KiB/s): min=18288, max=20480, per=34.22%, avg=19384.00, stdev=1549.98, samples=2 00:16:48.893 iops : min= 4572, max= 5120, avg=4846.00, stdev=387.49, samples=2 00:16:48.893 lat (msec) : 2=0.02%, 4=0.15%, 10=32.70%, 20=61.52%, 50=5.62% 00:16:48.893 cpu : usr=3.64%, sys=5.22%, ctx=761, majf=0, minf=1 00:16:48.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:48.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:48.893 issued rwts: total=4608,4973,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.893 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:48.893 job3: (groupid=0, jobs=1): err= 0: pid=2856687: Wed Apr 24 21:32:11 2024 00:16:48.893 read: IOPS=1516, BW=6065KiB/s (6211kB/s)(6144KiB/1013msec) 00:16:48.894 slat (nsec): min=1781, max=30755k, avg=351571.41, stdev=2167146.35 00:16:48.894 clat (msec): min=10, max=102, avg=40.25, stdev=27.19 00:16:48.894 lat (msec): min=10, max=102, avg=40.61, stdev=27.38 00:16:48.894 clat percentiles (msec): 00:16:48.894 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 19], 00:16:48.894 | 30.00th=[ 20], 40.00th=[ 23], 50.00th=[ 30], 60.00th=[ 36], 00:16:48.894 | 70.00th=[ 43], 80.00th=[ 75], 90.00th=[ 91], 95.00th=[ 94], 00:16:48.894 | 99.00th=[ 97], 99.50th=[ 101], 99.90th=[ 103], 99.95th=[ 103], 00:16:48.894 | 99.99th=[ 103] 00:16:48.894 write: IOPS=1727, BW=6910KiB/s (7076kB/s)(7000KiB/1013msec); 0 zone resets 00:16:48.894 slat (usec): min=2, max=19517, avg=244.69, stdev=1316.71 00:16:48.894 clat (msec): min=3, max=146, avg=38.08, stdev=27.50 00:16:48.894 lat (msec): min=3, max=146, avg=38.33, stdev=27.57 00:16:48.894 clat percentiles (msec): 00:16:48.894 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 15], 00:16:48.894 | 30.00th=[ 17], 40.00th=[ 21], 50.00th=[ 33], 60.00th=[ 42], 00:16:48.894 | 70.00th=[ 48], 80.00th=[ 58], 90.00th=[ 81], 95.00th=[ 88], 00:16:48.894 | 99.00th=[ 131], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:16:48.894 | 99.99th=[ 146] 00:16:48.894 bw ( KiB/s): min= 6200, max= 6770, per=11.45%, avg=6485.00, stdev=403.05, samples=2 00:16:48.894 iops : min= 1550, max= 1692, avg=1621.00, stdev=100.41, samples=2 00:16:48.894 lat (msec) : 4=0.27%, 10=1.34%, 20=36.31%, 50=34.94%, 100=25.35% 00:16:48.894 lat (msec) : 250=1.80% 00:16:48.894 cpu : usr=1.28%, sys=2.96%, ctx=192, majf=0, minf=1 00:16:48.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:16:48.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:48.894 issued rwts: total=1536,1750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:48.894 00:16:48.894 Run status group 0 (all jobs): 00:16:48.894 READ: bw=51.1MiB/s (53.6MB/s), 6065KiB/s-17.7MiB/s (6211kB/s-18.6MB/s), io=52.0MiB (54.5MB), run=1007-1017msec 00:16:48.894 WRITE: bw=55.3MiB/s (58.0MB/s), 6910KiB/s-19.1MiB/s (7076kB/s-20.0MB/s), io=56.3MiB (59.0MB), run=1007-1017msec 00:16:48.894 00:16:48.894 Disk stats (read/write): 00:16:48.894 nvme0n1: ios=3089/3293, merge=0/0, ticks=51353/50588, in_queue=101941, util=99.00% 00:16:48.894 nvme0n2: ios=3096/3247, merge=0/0, ticks=54724/45757, in_queue=100481, util=99.08% 
00:16:48.894 nvme0n3: ios=3613/4073, merge=0/0, ticks=39195/60690, in_queue=99885, util=89.40% 00:16:48.894 nvme0n4: ios=989/1024, merge=0/0, ticks=28486/43368, in_queue=71854, util=89.36% 00:16:48.894 21:32:11 -- target/fio.sh@55 -- # sync 00:16:48.894 21:32:11 -- target/fio.sh@59 -- # fio_pid=2856774 00:16:48.894 21:32:11 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:48.894 21:32:11 -- target/fio.sh@61 -- # sleep 3 00:16:48.894 [global] 00:16:48.894 thread=1 00:16:48.894 invalidate=1 00:16:48.894 rw=read 00:16:48.894 time_based=1 00:16:48.894 runtime=10 00:16:48.894 ioengine=libaio 00:16:48.894 direct=1 00:16:48.894 bs=4096 00:16:48.894 iodepth=1 00:16:48.894 norandommap=1 00:16:48.894 numjobs=1 00:16:48.894 00:16:48.894 [job0] 00:16:48.894 filename=/dev/nvme0n1 00:16:48.894 [job1] 00:16:48.894 filename=/dev/nvme0n2 00:16:48.894 [job2] 00:16:48.894 filename=/dev/nvme0n3 00:16:48.894 [job3] 00:16:48.894 filename=/dev/nvme0n4 00:16:48.894 Could not set queue depth (nvme0n1) 00:16:48.894 Could not set queue depth (nvme0n2) 00:16:48.894 Could not set queue depth (nvme0n3) 00:16:48.894 Could not set queue depth (nvme0n4) 00:16:49.152 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.152 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.152 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.152 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.152 fio-3.35 00:16:49.152 Starting 4 threads 00:16:52.445 21:32:14 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:52.445 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=19394560, buflen=4096 00:16:52.445 fio: pid=2857125, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:52.445 21:32:14 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:52.445 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=778240, buflen=4096 00:16:52.445 fio: pid=2857124, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:52.445 21:32:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:52.445 21:32:15 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:52.445 21:32:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:52.445 21:32:15 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:52.445 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=294912, buflen=4096 00:16:52.445 fio: pid=2857090, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:52.705 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=1863680, buflen=4096 00:16:52.705 fio: pid=2857112, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:52.705 21:32:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:52.705 21:32:15 -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:52.705 00:16:52.705 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2857090: Wed Apr 24 21:32:15 2024 00:16:52.705 read: IOPS=24, BW=95.7KiB/s (98.0kB/s)(288KiB/3010msec) 00:16:52.705 slat (usec): min=12, max=9565, avg=156.46, stdev=1116.54 00:16:52.705 clat (usec): min=1128, max=42979, avg=41349.01, stdev=4816.70 00:16:52.705 lat (usec): min=1191, max=51033, avg=41507.27, stdev=4945.01 00:16:52.705 clat percentiles (usec): 00:16:52.705 | 1.00th=[ 1123], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:16:52.705 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:52.705 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:52.705 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:52.705 | 99.99th=[42730] 00:16:52.705 bw ( KiB/s): min= 96, max= 96, per=1.40%, avg=96.00, stdev= 0.00, samples=5 00:16:52.705 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:16:52.705 lat (msec) : 2=1.37%, 50=97.26% 00:16:52.705 cpu : usr=0.13%, sys=0.00%, ctx=76, majf=0, minf=1 00:16:52.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.705 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.705 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.705 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2857112: Wed Apr 24 21:32:15 2024 00:16:52.705 read: IOPS=143, BW=571KiB/s (585kB/s)(1820KiB/3187msec) 00:16:52.705 slat (usec): min=5, max=12610, avg=91.50, stdev=889.94 00:16:52.705 clat (usec): min=572, max=42975, avg=6860.58, stdev=14585.53 00:16:52.705 lat (usec): min=582, max=54059, avg=6952.23, stdev=14714.48 00:16:52.705 clat percentiles (usec): 00:16:52.705 | 1.00th=[ 603], 5.00th=[ 635], 10.00th=[ 676], 20.00th=[ 734], 00:16:52.705 | 30.00th=[ 766], 40.00th=[ 799], 50.00th=[ 824], 60.00th=[ 857], 00:16:52.705 | 70.00th=[ 898], 80.00th=[ 963], 90.00th=[42206], 95.00th=[42206], 00:16:52.705 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:16:52.705 | 99.99th=[42730] 00:16:52.705 bw ( KiB/s): min= 92, max= 3128, per=8.77%, avg=600.67, stdev=1238.14, samples=6 00:16:52.705 iops : min= 23, max= 782, avg=150.17, stdev=309.53, samples=6 00:16:52.706 lat (usec) : 750=23.46%, 1000=59.43% 00:16:52.706 lat (msec) : 2=2.19%, 50=14.69% 00:16:52.706 cpu : usr=0.16%, sys=0.31%, ctx=462, majf=0, minf=1 00:16:52.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.706 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.706 issued rwts: total=456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.706 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2857124: Wed Apr 24 21:32:15 2024 00:16:52.706 read: IOPS=68, BW=272KiB/s (278kB/s)(760KiB/2796msec) 00:16:52.706 slat (nsec): min=8829, max=34932, avg=15402.99, stdev=7899.24 00:16:52.706 clat (usec): min=526, max=42962, avg=14585.90, stdev=19591.76 00:16:52.706 lat (usec): 
min=536, max=42988, avg=14601.25, stdev=19598.76 00:16:52.706 clat percentiles (usec): 00:16:52.706 | 1.00th=[ 529], 5.00th=[ 545], 10.00th=[ 553], 20.00th=[ 570], 00:16:52.706 | 30.00th=[ 603], 40.00th=[ 676], 50.00th=[ 766], 60.00th=[ 799], 00:16:52.706 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:52.706 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:52.706 | 99.99th=[42730] 00:16:52.706 bw ( KiB/s): min= 96, max= 1080, per=4.27%, avg=292.80, stdev=440.06, samples=5 00:16:52.706 iops : min= 24, max= 270, avg=73.20, stdev=110.01, samples=5 00:16:52.706 lat (usec) : 750=46.07%, 1000=19.37% 00:16:52.706 lat (msec) : 2=0.52%, 50=33.51% 00:16:52.706 cpu : usr=0.00%, sys=0.14%, ctx=192, majf=0, minf=1 00:16:52.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.706 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.706 issued rwts: total=191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.706 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2857125: Wed Apr 24 21:32:15 2024 00:16:52.706 read: IOPS=1818, BW=7273KiB/s (7448kB/s)(18.5MiB/2604msec) 00:16:52.706 slat (nsec): min=8685, max=50319, avg=10300.01, stdev=3443.42 00:16:52.706 clat (usec): min=438, max=1991, avg=533.23, stdev=75.65 00:16:52.706 lat (usec): min=447, max=2003, avg=543.53, stdev=76.91 00:16:52.706 clat percentiles (usec): 00:16:52.706 | 1.00th=[ 465], 5.00th=[ 478], 10.00th=[ 486], 20.00th=[ 498], 00:16:52.706 | 30.00th=[ 502], 40.00th=[ 510], 50.00th=[ 515], 60.00th=[ 523], 00:16:52.706 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 594], 95.00th=[ 668], 00:16:52.706 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 1713], 99.95th=[ 1860], 00:16:52.706 | 99.99th=[ 1991] 00:16:52.706 bw ( KiB/s): min= 7080, max= 7656, per=100.00%, avg=7337.60, stdev=237.75, samples=5 00:16:52.706 iops : min= 1770, max= 1914, avg=1834.40, stdev=59.44, samples=5 00:16:52.706 lat (usec) : 500=24.73%, 750=74.49%, 1000=0.55% 00:16:52.706 lat (msec) : 2=0.21% 00:16:52.706 cpu : usr=0.96%, sys=3.53%, ctx=4738, majf=0, minf=2 00:16:52.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.706 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.706 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.706 00:16:52.706 Run status group 0 (all jobs): 00:16:52.706 READ: bw=6843KiB/s (7007kB/s), 95.7KiB/s-7273KiB/s (98.0kB/s-7448kB/s), io=21.3MiB (22.3MB), run=2604-3187msec 00:16:52.706 00:16:52.706 Disk stats (read/write): 00:16:52.706 nvme0n1: ios=68/0, merge=0/0, ticks=2812/0, in_queue=2812, util=94.05% 00:16:52.706 nvme0n2: ios=484/0, merge=0/0, ticks=3900/0, in_queue=3900, util=98.85% 00:16:52.706 nvme0n3: ios=224/0, merge=0/0, ticks=3447/0, in_queue=3447, util=100.00% 00:16:52.706 nvme0n4: ios=4700/0, merge=0/0, ticks=2473/0, in_queue=2473, util=96.47% 00:16:52.966 21:32:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:52.966 21:32:15 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc3 00:16:52.966 21:32:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:52.966 21:32:15 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:53.226 21:32:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:53.226 21:32:15 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:53.525 21:32:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:53.525 21:32:16 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:53.525 21:32:16 -- target/fio.sh@69 -- # fio_status=0 00:16:53.525 21:32:16 -- target/fio.sh@70 -- # wait 2856774 00:16:53.525 21:32:16 -- target/fio.sh@70 -- # fio_status=4 00:16:53.525 21:32:16 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:53.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.785 21:32:16 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:53.785 21:32:16 -- common/autotest_common.sh@1205 -- # local i=0 00:16:53.785 21:32:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:53.785 21:32:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.785 21:32:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:53.785 21:32:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.785 21:32:16 -- common/autotest_common.sh@1217 -- # return 0 00:16:53.785 21:32:16 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:53.785 21:32:16 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:53.785 nvmf hotplug test: fio failed as expected 00:16:53.785 21:32:16 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.785 21:32:16 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:53.785 21:32:16 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:53.785 21:32:16 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:53.785 21:32:16 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:53.785 21:32:16 -- target/fio.sh@91 -- # nvmftestfini 00:16:53.785 21:32:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:53.785 21:32:16 -- nvmf/common.sh@117 -- # sync 00:16:54.045 21:32:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:54.045 21:32:16 -- nvmf/common.sh@120 -- # set +e 00:16:54.045 21:32:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:54.045 21:32:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:54.045 rmmod nvme_tcp 00:16:54.045 rmmod nvme_fabrics 00:16:54.045 rmmod nvme_keyring 00:16:54.045 21:32:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:54.045 21:32:16 -- nvmf/common.sh@124 -- # set -e 00:16:54.045 21:32:16 -- nvmf/common.sh@125 -- # return 0 00:16:54.045 21:32:16 -- nvmf/common.sh@478 -- # '[' -n 2853868 ']' 00:16:54.045 21:32:16 -- nvmf/common.sh@479 -- # killprocess 2853868 00:16:54.045 21:32:16 -- common/autotest_common.sh@936 -- # '[' -z 2853868 ']' 00:16:54.045 21:32:16 -- common/autotest_common.sh@940 -- # kill -0 2853868 00:16:54.045 21:32:16 -- common/autotest_common.sh@941 -- # uname 00:16:54.045 21:32:16 -- common/autotest_common.sh@941 -- # '[' 
Linux = Linux ']' 00:16:54.045 21:32:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2853868 00:16:54.045 21:32:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:54.045 21:32:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:54.045 21:32:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2853868' 00:16:54.045 killing process with pid 2853868 00:16:54.045 21:32:16 -- common/autotest_common.sh@955 -- # kill 2853868 00:16:54.045 21:32:16 -- common/autotest_common.sh@960 -- # wait 2853868 00:16:54.305 21:32:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:54.305 21:32:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:54.305 21:32:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:54.305 21:32:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.305 21:32:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:54.305 21:32:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.305 21:32:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.305 21:32:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.212 21:32:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.212 00:16:56.212 real 0m28.331s 00:16:56.212 user 2m2.580s 00:16:56.212 sys 0m9.854s 00:16:56.212 21:32:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:56.212 21:32:19 -- common/autotest_common.sh@10 -- # set +x 00:16:56.212 ************************************ 00:16:56.212 END TEST nvmf_fio_target 00:16:56.212 ************************************ 00:16:56.212 21:32:19 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:56.212 21:32:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:56.212 21:32:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.212 21:32:19 -- common/autotest_common.sh@10 -- # set +x 00:16:56.472 ************************************ 00:16:56.472 START TEST nvmf_bdevio 00:16:56.472 ************************************ 00:16:56.472 21:32:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:56.472 * Looking for test storage... 
00:16:56.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.472 21:32:19 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.472 21:32:19 -- nvmf/common.sh@7 -- # uname -s 00:16:56.472 21:32:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.472 21:32:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.472 21:32:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.472 21:32:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.472 21:32:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.472 21:32:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.472 21:32:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.472 21:32:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.472 21:32:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.472 21:32:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.472 21:32:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:56.472 21:32:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:56.472 21:32:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.472 21:32:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.472 21:32:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.472 21:32:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.472 21:32:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.472 21:32:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.472 21:32:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.472 21:32:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.472 21:32:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.472 21:32:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.472 21:32:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.472 21:32:19 -- paths/export.sh@5 -- # export PATH 00:16:56.472 21:32:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.472 21:32:19 -- nvmf/common.sh@47 -- # : 0 00:16:56.472 21:32:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.472 21:32:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.472 21:32:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.472 21:32:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.472 21:32:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.472 21:32:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.472 21:32:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.472 21:32:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.472 21:32:19 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:56.472 21:32:19 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:56.472 21:32:19 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:56.472 21:32:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:56.472 21:32:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.472 21:32:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:56.472 21:32:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:56.472 21:32:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:56.472 21:32:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.472 21:32:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.472 21:32:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.472 21:32:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:56.472 21:32:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:56.472 21:32:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:56.472 21:32:19 -- common/autotest_common.sh@10 -- # set +x 00:17:03.055 21:32:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:03.055 21:32:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:03.055 21:32:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:03.055 21:32:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:03.055 21:32:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:03.055 21:32:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:03.055 21:32:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:03.055 21:32:25 -- nvmf/common.sh@295 -- # net_devs=() 00:17:03.055 21:32:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:03.055 21:32:25 -- nvmf/common.sh@296 
-- # e810=() 00:17:03.055 21:32:25 -- nvmf/common.sh@296 -- # local -ga e810 00:17:03.055 21:32:25 -- nvmf/common.sh@297 -- # x722=() 00:17:03.055 21:32:25 -- nvmf/common.sh@297 -- # local -ga x722 00:17:03.055 21:32:25 -- nvmf/common.sh@298 -- # mlx=() 00:17:03.055 21:32:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:03.055 21:32:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.055 21:32:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.055 21:32:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.055 21:32:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.055 21:32:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.055 21:32:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.055 21:32:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.055 21:32:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.055 21:32:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.055 21:32:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.055 21:32:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.055 21:32:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:03.055 21:32:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:03.055 21:32:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:03.055 21:32:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:03.055 21:32:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:03.055 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:03.055 21:32:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:03.055 21:32:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:03.055 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:03.055 21:32:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:03.055 21:32:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:03.055 21:32:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.055 21:32:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:03.055 21:32:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.055 21:32:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:03.055 Found 
net devices under 0000:af:00.0: cvl_0_0 00:17:03.055 21:32:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.055 21:32:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:03.055 21:32:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.055 21:32:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:03.055 21:32:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.055 21:32:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:03.055 Found net devices under 0000:af:00.1: cvl_0_1 00:17:03.055 21:32:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.055 21:32:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:03.055 21:32:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:03.055 21:32:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:03.055 21:32:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.055 21:32:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.055 21:32:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.055 21:32:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:03.055 21:32:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.055 21:32:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.055 21:32:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:03.055 21:32:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.055 21:32:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.055 21:32:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:03.055 21:32:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:03.055 21:32:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:03.055 21:32:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.055 21:32:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.055 21:32:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.055 21:32:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:03.055 21:32:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.055 21:32:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.055 21:32:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.055 21:32:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:03.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:17:03.055 00:17:03.055 --- 10.0.0.2 ping statistics --- 00:17:03.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.055 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:17:03.055 21:32:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:03.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:17:03.055 00:17:03.055 --- 10.0.0.1 ping statistics --- 00:17:03.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.055 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:17:03.055 21:32:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.055 21:32:25 -- nvmf/common.sh@411 -- # return 0 00:17:03.055 21:32:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:03.055 21:32:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.055 21:32:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:03.055 21:32:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.055 21:32:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:03.055 21:32:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:03.055 21:32:25 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:03.055 21:32:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:03.055 21:32:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:03.055 21:32:25 -- common/autotest_common.sh@10 -- # set +x 00:17:03.055 21:32:25 -- nvmf/common.sh@470 -- # nvmfpid=2861504 00:17:03.056 21:32:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:03.056 21:32:25 -- nvmf/common.sh@471 -- # waitforlisten 2861504 00:17:03.056 21:32:25 -- common/autotest_common.sh@817 -- # '[' -z 2861504 ']' 00:17:03.056 21:32:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.056 21:32:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:03.056 21:32:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.056 21:32:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:03.056 21:32:25 -- common/autotest_common.sh@10 -- # set +x 00:17:03.315 [2024-04-24 21:32:25.985324] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:17:03.315 [2024-04-24 21:32:25.985369] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.315 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.315 [2024-04-24 21:32:26.058320] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.315 [2024-04-24 21:32:26.130187] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.315 [2024-04-24 21:32:26.130224] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.315 [2024-04-24 21:32:26.130233] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.315 [2024-04-24 21:32:26.130242] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.315 [2024-04-24 21:32:26.130249] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
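(The app_setup_trace notices above double as the how-to for inspecting this target while it runs. A minimal sketch of that workflow, assuming spdk_trace is built into the same build/bin tree as the nvmf_tgt binary invoked above; the offline-replay step relies on spdk_trace's -f option and is an assumption of this note, not something exercised by this job:

$ ./build/bin/spdk_trace -s nvmf -i 0              # snapshot live tracepoints for app "nvmf", shm id 0, per the hint above
$ cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0       # or keep the raw trace file, as the notice suggests
$ ./build/bin/spdk_trace -f /tmp/nvmf_trace.0      # assumed: parse the copied trace file offline

The -e 0xFFFF passed to nvmf_tgt above is what enables the full "Tracepoint Group Mask 0xFFFF" these commands read.)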
00:17:03.315 [2024-04-24 21:32:26.130366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:03.315 [2024-04-24 21:32:26.130493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:03.315 [2024-04-24 21:32:26.130600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.315 [2024-04-24 21:32:26.130601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:04.254 21:32:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:04.254 21:32:26 -- common/autotest_common.sh@850 -- # return 0 00:17:04.254 21:32:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:04.254 21:32:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:04.254 21:32:26 -- common/autotest_common.sh@10 -- # set +x 00:17:04.254 21:32:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.254 21:32:26 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:04.254 21:32:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.254 21:32:26 -- common/autotest_common.sh@10 -- # set +x 00:17:04.254 [2024-04-24 21:32:26.850320] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.254 21:32:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.254 21:32:26 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:04.254 21:32:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.254 21:32:26 -- common/autotest_common.sh@10 -- # set +x 00:17:04.254 Malloc0 00:17:04.254 21:32:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.254 21:32:26 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:04.254 21:32:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.254 21:32:26 -- common/autotest_common.sh@10 -- # set +x 00:17:04.254 21:32:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.254 21:32:26 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:04.254 21:32:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.254 21:32:26 -- common/autotest_common.sh@10 -- # set +x 00:17:04.254 21:32:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.254 21:32:26 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.254 21:32:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.254 21:32:26 -- common/autotest_common.sh@10 -- # set +x 00:17:04.254 [2024-04-24 21:32:26.904553] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.254 21:32:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.254 21:32:26 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:04.254 21:32:26 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:04.254 21:32:26 -- nvmf/common.sh@521 -- # config=() 00:17:04.254 21:32:26 -- nvmf/common.sh@521 -- # local subsystem config 00:17:04.254 21:32:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:04.254 21:32:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:04.254 { 00:17:04.254 "params": { 00:17:04.254 "name": "Nvme$subsystem", 00:17:04.254 "trtype": "$TEST_TRANSPORT", 00:17:04.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:04.254 "adrfam": "ipv4", 00:17:04.254 "trsvcid": 
"$NVMF_PORT", 00:17:04.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:04.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:04.254 "hdgst": ${hdgst:-false}, 00:17:04.254 "ddgst": ${ddgst:-false} 00:17:04.254 }, 00:17:04.254 "method": "bdev_nvme_attach_controller" 00:17:04.254 } 00:17:04.254 EOF 00:17:04.254 )") 00:17:04.254 21:32:26 -- nvmf/common.sh@543 -- # cat 00:17:04.254 21:32:26 -- nvmf/common.sh@545 -- # jq . 00:17:04.254 21:32:26 -- nvmf/common.sh@546 -- # IFS=, 00:17:04.254 21:32:26 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:04.254 "params": { 00:17:04.254 "name": "Nvme1", 00:17:04.254 "trtype": "tcp", 00:17:04.254 "traddr": "10.0.0.2", 00:17:04.254 "adrfam": "ipv4", 00:17:04.254 "trsvcid": "4420", 00:17:04.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:04.254 "hdgst": false, 00:17:04.254 "ddgst": false 00:17:04.254 }, 00:17:04.254 "method": "bdev_nvme_attach_controller" 00:17:04.254 }' 00:17:04.254 [2024-04-24 21:32:26.955522] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:17:04.254 [2024-04-24 21:32:26.955570] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861681 ] 00:17:04.254 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.254 [2024-04-24 21:32:27.026136] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:04.254 [2024-04-24 21:32:27.098076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.254 [2024-04-24 21:32:27.098171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.254 [2024-04-24 21:32:27.098173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.515 I/O targets: 00:17:04.515 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:04.515 00:17:04.515 00:17:04.515 CUnit - A unit testing framework for C - Version 2.1-3 00:17:04.515 http://cunit.sourceforge.net/ 00:17:04.515 00:17:04.515 00:17:04.515 Suite: bdevio tests on: Nvme1n1 00:17:04.515 Test: blockdev write read block ...passed 00:17:04.515 Test: blockdev write zeroes read block ...passed 00:17:04.515 Test: blockdev write zeroes read no split ...passed 00:17:04.774 Test: blockdev write zeroes read split ...passed 00:17:04.774 Test: blockdev write zeroes read split partial ...passed 00:17:04.774 Test: blockdev reset ...[2024-04-24 21:32:27.494256] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:04.774 [2024-04-24 21:32:27.494318] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99b8d0 (9): Bad file descriptor 00:17:04.774 [2024-04-24 21:32:27.522966] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:04.774 passed 00:17:04.774 Test: blockdev write read 8 blocks ...passed 00:17:04.774 Test: blockdev write read size > 128k ...passed 00:17:04.774 Test: blockdev write read invalid size ...passed 00:17:04.774 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:04.774 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:04.774 Test: blockdev write read max offset ...passed 00:17:04.774 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:05.034 Test: blockdev writev readv 8 blocks ...passed 00:17:05.034 Test: blockdev writev readv 30 x 1block ...passed 00:17:05.034 Test: blockdev writev readv block ...passed 00:17:05.034 Test: blockdev writev readv size > 128k ...passed 00:17:05.034 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:05.034 Test: blockdev comparev and writev ...[2024-04-24 21:32:27.751043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.034 [2024-04-24 21:32:27.751077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:05.034 [2024-04-24 21:32:27.751093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.034 [2024-04-24 21:32:27.751103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.034 [2024-04-24 21:32:27.751685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.034 [2024-04-24 21:32:27.751699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:05.034 [2024-04-24 21:32:27.751713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.034 [2024-04-24 21:32:27.751723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:05.034 [2024-04-24 21:32:27.752205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.034 [2024-04-24 21:32:27.752219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:05.034 [2024-04-24 21:32:27.752233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.034 [2024-04-24 21:32:27.752243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:05.034 [2024-04-24 21:32:27.752709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.034 [2024-04-24 21:32:27.752722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:05.034 [2024-04-24 21:32:27.752736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.034 [2024-04-24 21:32:27.752746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:05.034 passed 00:17:05.034 Test: blockdev nvme passthru rw ...passed 00:17:05.034 Test: blockdev nvme passthru vendor specific ...[2024-04-24 21:32:27.836276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.034 [2024-04-24 21:32:27.836293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:05.034 [2024-04-24 21:32:27.836647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.034 [2024-04-24 21:32:27.836661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:05.034 [2024-04-24 21:32:27.837011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.034 [2024-04-24 21:32:27.837024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:05.034 [2024-04-24 21:32:27.837372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.034 [2024-04-24 21:32:27.837385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:05.034 passed 00:17:05.034 Test: blockdev nvme admin passthru ...passed 00:17:05.034 Test: blockdev copy ...passed 00:17:05.034 00:17:05.034 Run Summary: Type Total Ran Passed Failed Inactive 00:17:05.034 suites 1 1 n/a 0 0 00:17:05.034 tests 23 23 23 0 0 00:17:05.034 asserts 152 152 152 0 n/a 00:17:05.034 00:17:05.034 Elapsed time = 1.285 seconds 00:17:05.293 21:32:28 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.293 21:32:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.293 21:32:28 -- common/autotest_common.sh@10 -- # set +x 00:17:05.293 21:32:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.293 21:32:28 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:05.293 21:32:28 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:05.293 21:32:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:05.293 21:32:28 -- nvmf/common.sh@117 -- # sync 00:17:05.293 21:32:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.293 21:32:28 -- nvmf/common.sh@120 -- # set +e 00:17:05.293 21:32:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.293 21:32:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.294 rmmod nvme_tcp 00:17:05.294 rmmod nvme_fabrics 00:17:05.294 rmmod nvme_keyring 00:17:05.294 21:32:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.294 21:32:28 -- nvmf/common.sh@124 -- # set -e 00:17:05.294 21:32:28 -- nvmf/common.sh@125 -- # return 0 00:17:05.294 21:32:28 -- nvmf/common.sh@478 -- # '[' -n 2861504 ']' 00:17:05.294 21:32:28 -- nvmf/common.sh@479 -- # killprocess 2861504 00:17:05.294 21:32:28 -- common/autotest_common.sh@936 -- # '[' -z 2861504 ']' 00:17:05.294 21:32:28 -- common/autotest_common.sh@940 -- # kill -0 2861504 00:17:05.294 21:32:28 -- common/autotest_common.sh@941 -- # uname 00:17:05.294 21:32:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:05.294 21:32:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2861504 00:17:05.554 21:32:28 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:05.554 21:32:28 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:05.554 21:32:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2861504' 00:17:05.554 killing process with pid 2861504 00:17:05.554 21:32:28 -- common/autotest_common.sh@955 -- # kill 2861504 00:17:05.554 21:32:28 -- common/autotest_common.sh@960 -- # wait 2861504 00:17:05.554 21:32:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:05.554 21:32:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:05.554 21:32:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:05.554 21:32:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.554 21:32:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:05.554 21:32:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.554 21:32:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.554 21:32:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.095 21:32:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:08.095 00:17:08.095 real 0m11.267s 00:17:08.095 user 0m12.887s 00:17:08.095 sys 0m5.703s 00:17:08.095 21:32:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:08.095 21:32:30 -- common/autotest_common.sh@10 -- # set +x 00:17:08.095 ************************************ 00:17:08.095 END TEST nvmf_bdevio 00:17:08.095 ************************************ 00:17:08.095 21:32:30 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:17:08.095 21:32:30 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:08.095 21:32:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:08.095 21:32:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:08.095 21:32:30 -- common/autotest_common.sh@10 -- # set +x 00:17:08.095 ************************************ 00:17:08.095 START TEST nvmf_bdevio_no_huge 00:17:08.095 ************************************ 00:17:08.095 21:32:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:08.095 * Looking for test storage... 
00:17:08.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.095 21:32:30 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.095 21:32:30 -- nvmf/common.sh@7 -- # uname -s 00:17:08.095 21:32:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.095 21:32:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.095 21:32:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.095 21:32:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.095 21:32:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.095 21:32:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.095 21:32:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.095 21:32:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.095 21:32:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.095 21:32:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.096 21:32:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:08.096 21:32:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:08.096 21:32:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.096 21:32:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.096 21:32:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.096 21:32:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.096 21:32:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.096 21:32:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.096 21:32:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.096 21:32:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.096 21:32:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.096 21:32:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.096 21:32:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.096 21:32:30 -- paths/export.sh@5 -- # export PATH 00:17:08.096 21:32:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.096 21:32:30 -- nvmf/common.sh@47 -- # : 0 00:17:08.096 21:32:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.096 21:32:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.096 21:32:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.096 21:32:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.096 21:32:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.096 21:32:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.096 21:32:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.096 21:32:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.096 21:32:30 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:08.096 21:32:30 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:08.096 21:32:30 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:08.096 21:32:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:08.096 21:32:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.096 21:32:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:08.096 21:32:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:08.096 21:32:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:08.096 21:32:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.096 21:32:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.096 21:32:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.096 21:32:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:08.096 21:32:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:08.096 21:32:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.096 21:32:30 -- common/autotest_common.sh@10 -- # set +x 00:17:14.675 21:32:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:14.675 21:32:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:14.675 21:32:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:14.675 21:32:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:14.675 21:32:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:14.675 21:32:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:14.675 21:32:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:14.675 21:32:37 -- nvmf/common.sh@295 -- # net_devs=() 00:17:14.675 21:32:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:14.675 21:32:37 -- nvmf/common.sh@296 
-- # e810=() 00:17:14.675 21:32:37 -- nvmf/common.sh@296 -- # local -ga e810 00:17:14.675 21:32:37 -- nvmf/common.sh@297 -- # x722=() 00:17:14.675 21:32:37 -- nvmf/common.sh@297 -- # local -ga x722 00:17:14.675 21:32:37 -- nvmf/common.sh@298 -- # mlx=() 00:17:14.675 21:32:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:14.675 21:32:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.675 21:32:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.675 21:32:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.675 21:32:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.675 21:32:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.675 21:32:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.675 21:32:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.675 21:32:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.675 21:32:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.675 21:32:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.675 21:32:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.675 21:32:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:14.675 21:32:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:14.675 21:32:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:14.675 21:32:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:14.675 21:32:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:14.675 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:14.675 21:32:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:14.675 21:32:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:14.675 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:14.675 21:32:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:14.675 21:32:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:14.675 21:32:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.675 21:32:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:14.675 21:32:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.675 21:32:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:14.675 Found 
net devices under 0000:af:00.0: cvl_0_0 00:17:14.675 21:32:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.675 21:32:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:14.675 21:32:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.675 21:32:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:14.675 21:32:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.675 21:32:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:14.675 Found net devices under 0000:af:00.1: cvl_0_1 00:17:14.675 21:32:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.675 21:32:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:14.675 21:32:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:14.675 21:32:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:14.675 21:32:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:14.675 21:32:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.675 21:32:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.675 21:32:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:14.675 21:32:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:14.675 21:32:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:14.675 21:32:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:14.675 21:32:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:14.675 21:32:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:14.675 21:32:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.675 21:32:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:14.675 21:32:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:14.675 21:32:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:14.675 21:32:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.675 21:32:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.675 21:32:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.675 21:32:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:14.675 21:32:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.675 21:32:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.675 21:32:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.675 21:32:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:14.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:17:14.675 00:17:14.675 --- 10.0.0.2 ping statistics --- 00:17:14.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.675 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:17:14.675 21:32:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:14.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:17:14.675 00:17:14.675 --- 10.0.0.1 ping statistics --- 00:17:14.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.675 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:17:14.675 21:32:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.675 21:32:37 -- nvmf/common.sh@411 -- # return 0 00:17:14.675 21:32:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:14.675 21:32:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.676 21:32:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:14.676 21:32:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:14.676 21:32:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.676 21:32:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:14.676 21:32:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:14.676 21:32:37 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:14.676 21:32:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:14.676 21:32:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:14.676 21:32:37 -- common/autotest_common.sh@10 -- # set +x 00:17:14.676 21:32:37 -- nvmf/common.sh@470 -- # nvmfpid=2865640 00:17:14.676 21:32:37 -- nvmf/common.sh@471 -- # waitforlisten 2865640 00:17:14.676 21:32:37 -- common/autotest_common.sh@817 -- # '[' -z 2865640 ']' 00:17:14.676 21:32:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.676 21:32:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:14.676 21:32:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.676 21:32:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:14.676 21:32:37 -- common/autotest_common.sh@10 -- # set +x 00:17:14.676 21:32:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:14.935 [2024-04-24 21:32:37.603111] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:17:14.935 [2024-04-24 21:32:37.603157] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:14.935 [2024-04-24 21:32:37.682370] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:14.935 [2024-04-24 21:32:37.778063] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.935 [2024-04-24 21:32:37.778099] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.935 [2024-04-24 21:32:37.778108] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.935 [2024-04-24 21:32:37.778116] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.935 [2024-04-24 21:32:37.778122] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
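Note: the nvmf_tcp_init sequence traced above builds the point-to-point topology used by the TCP tests in this run: port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) as the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Reproduced standalone (same interface names as in the log, run as root), the plumbing is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                   # initiator -> target sanity check

The target application is then run inside the namespace, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD ("ip netns exec cvl_0_0_ns_spdk") in the trace above.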
00:17:14.935 [2024-04-24 21:32:37.778247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:14.935 [2024-04-24 21:32:37.778355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:14.935 [2024-04-24 21:32:37.778481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:14.935 [2024-04-24 21:32:37.778492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:15.874 21:32:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:15.874 21:32:38 -- common/autotest_common.sh@850 -- # return 0 00:17:15.874 21:32:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:15.874 21:32:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:15.874 21:32:38 -- common/autotest_common.sh@10 -- # set +x 00:17:15.874 21:32:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.874 21:32:38 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:15.874 21:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.874 21:32:38 -- common/autotest_common.sh@10 -- # set +x 00:17:15.874 [2024-04-24 21:32:38.462816] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.874 21:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.874 21:32:38 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:15.874 21:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.874 21:32:38 -- common/autotest_common.sh@10 -- # set +x 00:17:15.874 Malloc0 00:17:15.874 21:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.874 21:32:38 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:15.874 21:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.874 21:32:38 -- common/autotest_common.sh@10 -- # set +x 00:17:15.874 21:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.874 21:32:38 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:15.874 21:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.874 21:32:38 -- common/autotest_common.sh@10 -- # set +x 00:17:15.874 21:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.874 21:32:38 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.874 21:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.874 21:32:38 -- common/autotest_common.sh@10 -- # set +x 00:17:15.874 [2024-04-24 21:32:38.507685] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.874 21:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.874 21:32:38 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:15.874 21:32:38 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:15.874 21:32:38 -- nvmf/common.sh@521 -- # config=() 00:17:15.874 21:32:38 -- nvmf/common.sh@521 -- # local subsystem config 00:17:15.874 21:32:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:15.874 21:32:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:15.874 { 00:17:15.874 "params": { 00:17:15.874 "name": "Nvme$subsystem", 00:17:15.874 "trtype": "$TEST_TRANSPORT", 00:17:15.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.874 "adrfam": "ipv4", 00:17:15.874 
"trsvcid": "$NVMF_PORT", 00:17:15.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.874 "hdgst": ${hdgst:-false}, 00:17:15.874 "ddgst": ${ddgst:-false} 00:17:15.874 }, 00:17:15.874 "method": "bdev_nvme_attach_controller" 00:17:15.874 } 00:17:15.874 EOF 00:17:15.874 )") 00:17:15.874 21:32:38 -- nvmf/common.sh@543 -- # cat 00:17:15.874 21:32:38 -- nvmf/common.sh@545 -- # jq . 00:17:15.874 21:32:38 -- nvmf/common.sh@546 -- # IFS=, 00:17:15.874 21:32:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:15.874 "params": { 00:17:15.875 "name": "Nvme1", 00:17:15.875 "trtype": "tcp", 00:17:15.875 "traddr": "10.0.0.2", 00:17:15.875 "adrfam": "ipv4", 00:17:15.875 "trsvcid": "4420", 00:17:15.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:15.875 "hdgst": false, 00:17:15.875 "ddgst": false 00:17:15.875 }, 00:17:15.875 "method": "bdev_nvme_attach_controller" 00:17:15.875 }' 00:17:15.875 [2024-04-24 21:32:38.540806] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:17:15.875 [2024-04-24 21:32:38.540855] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2865812 ] 00:17:15.875 [2024-04-24 21:32:38.615646] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:15.875 [2024-04-24 21:32:38.716640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.875 [2024-04-24 21:32:38.716734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.875 [2024-04-24 21:32:38.716737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.134 I/O targets: 00:17:16.134 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:16.134 00:17:16.134 00:17:16.134 CUnit - A unit testing framework for C - Version 2.1-3 00:17:16.134 http://cunit.sourceforge.net/ 00:17:16.134 00:17:16.134 00:17:16.134 Suite: bdevio tests on: Nvme1n1 00:17:16.134 Test: blockdev write read block ...passed 00:17:16.134 Test: blockdev write zeroes read block ...passed 00:17:16.134 Test: blockdev write zeroes read no split ...passed 00:17:16.394 Test: blockdev write zeroes read split ...passed 00:17:16.394 Test: blockdev write zeroes read split partial ...passed 00:17:16.394 Test: blockdev reset ...[2024-04-24 21:32:39.132843] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:16.394 [2024-04-24 21:32:39.132905] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13acbe0 (9): Bad file descriptor 00:17:16.394 [2024-04-24 21:32:39.235972] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:16.394 passed 00:17:16.394 Test: blockdev write read 8 blocks ...passed 00:17:16.654 Test: blockdev write read size > 128k ...passed 00:17:16.654 Test: blockdev write read invalid size ...passed 00:17:16.654 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:16.654 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:16.654 Test: blockdev write read max offset ...passed 00:17:16.654 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:16.654 Test: blockdev writev readv 8 blocks ...passed 00:17:16.654 Test: blockdev writev readv 30 x 1block ...passed 00:17:16.654 Test: blockdev writev readv block ...passed 00:17:16.654 Test: blockdev writev readv size > 128k ...passed 00:17:16.654 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:16.654 Test: blockdev comparev and writev ...[2024-04-24 21:32:39.513952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.654 [2024-04-24 21:32:39.513980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.654 [2024-04-24 21:32:39.513996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.654 [2024-04-24 21:32:39.514007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:16.654 [2024-04-24 21:32:39.514485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.654 [2024-04-24 21:32:39.514505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:16.654 [2024-04-24 21:32:39.514519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.654 [2024-04-24 21:32:39.514529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:16.654 [2024-04-24 21:32:39.515011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.654 [2024-04-24 21:32:39.515023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:16.654 [2024-04-24 21:32:39.515037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.654 [2024-04-24 21:32:39.515046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:16.654 [2024-04-24 21:32:39.515498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.654 [2024-04-24 21:32:39.515511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:16.654 [2024-04-24 21:32:39.515525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.654 [2024-04-24 21:32:39.515534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:16.913 passed 00:17:16.913 Test: blockdev nvme passthru rw ...passed 00:17:16.913 Test: blockdev nvme passthru vendor specific ...[2024-04-24 21:32:39.599328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:16.913 [2024-04-24 21:32:39.599344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:16.913 [2024-04-24 21:32:39.599692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:16.913 [2024-04-24 21:32:39.599704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:16.913 [2024-04-24 21:32:39.600097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:16.913 [2024-04-24 21:32:39.600108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:16.913 [2024-04-24 21:32:39.600448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:16.913 [2024-04-24 21:32:39.600465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:16.913 passed 00:17:16.913 Test: blockdev nvme admin passthru ...passed 00:17:16.913 Test: blockdev copy ...passed 00:17:16.913 00:17:16.913 Run Summary: Type Total Ran Passed Failed Inactive 00:17:16.913 suites 1 1 n/a 0 0 00:17:16.913 tests 23 23 23 0 0 00:17:16.913 asserts 152 152 152 0 n/a 00:17:16.913 00:17:16.913 Elapsed time = 1.556 seconds 00:17:17.173 21:32:39 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:17.173 21:32:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.173 21:32:39 -- common/autotest_common.sh@10 -- # set +x 00:17:17.173 21:32:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.173 21:32:40 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:17.173 21:32:40 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:17.173 21:32:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:17.173 21:32:40 -- nvmf/common.sh@117 -- # sync 00:17:17.173 21:32:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:17.173 21:32:40 -- nvmf/common.sh@120 -- # set +e 00:17:17.173 21:32:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:17.173 21:32:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:17.173 rmmod nvme_tcp 00:17:17.173 rmmod nvme_fabrics 00:17:17.173 rmmod nvme_keyring 00:17:17.433 21:32:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:17.433 21:32:40 -- nvmf/common.sh@124 -- # set -e 00:17:17.433 21:32:40 -- nvmf/common.sh@125 -- # return 0 00:17:17.433 21:32:40 -- nvmf/common.sh@478 -- # '[' -n 2865640 ']' 00:17:17.433 21:32:40 -- nvmf/common.sh@479 -- # killprocess 2865640 00:17:17.433 21:32:40 -- common/autotest_common.sh@936 -- # '[' -z 2865640 ']' 00:17:17.433 21:32:40 -- common/autotest_common.sh@940 -- # kill -0 2865640 00:17:17.433 21:32:40 -- common/autotest_common.sh@941 -- # uname 00:17:17.433 21:32:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:17.433 21:32:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2865640 00:17:17.433 21:32:40 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:17.433 21:32:40 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:17.433 21:32:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2865640' 00:17:17.433 killing process with pid 2865640 00:17:17.433 21:32:40 -- common/autotest_common.sh@955 -- # kill 2865640 00:17:17.433 21:32:40 -- common/autotest_common.sh@960 -- # wait 2865640 00:17:17.693 21:32:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:17.693 21:32:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:17.693 21:32:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:17.693 21:32:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.693 21:32:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:17.693 21:32:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.693 21:32:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.693 21:32:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.232 21:32:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:20.232 00:17:20.232 real 0m11.902s 00:17:20.232 user 0m14.620s 00:17:20.232 sys 0m6.361s 00:17:20.232 21:32:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:20.232 21:32:42 -- common/autotest_common.sh@10 -- # set +x 00:17:20.232 ************************************ 00:17:20.232 END TEST nvmf_bdevio_no_huge 00:17:20.232 ************************************ 00:17:20.232 21:32:42 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:20.232 21:32:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:20.232 21:32:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:20.232 21:32:42 -- common/autotest_common.sh@10 -- # set +x 00:17:20.232 ************************************ 00:17:20.232 START TEST nvmf_tls 00:17:20.232 ************************************ 00:17:20.232 21:32:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:20.232 * Looking for test storage... 
00:17:20.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:20.232 21:32:42 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.232 21:32:42 -- nvmf/common.sh@7 -- # uname -s 00:17:20.232 21:32:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.232 21:32:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.232 21:32:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.232 21:32:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.232 21:32:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.232 21:32:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.232 21:32:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.232 21:32:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.232 21:32:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.232 21:32:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.232 21:32:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:20.232 21:32:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:20.232 21:32:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.232 21:32:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.232 21:32:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:20.232 21:32:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.232 21:32:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:20.232 21:32:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.232 21:32:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.232 21:32:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.233 21:32:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.233 21:32:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.233 21:32:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.233 21:32:42 -- paths/export.sh@5 -- # export PATH 00:17:20.233 21:32:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.233 21:32:42 -- nvmf/common.sh@47 -- # : 0 00:17:20.233 21:32:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:20.233 21:32:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:20.233 21:32:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.233 21:32:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.233 21:32:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.233 21:32:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:20.233 21:32:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:20.233 21:32:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:20.233 21:32:42 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:20.233 21:32:42 -- target/tls.sh@62 -- # nvmftestinit 00:17:20.233 21:32:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:20.233 21:32:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.233 21:32:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:20.233 21:32:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:20.233 21:32:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:20.233 21:32:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.233 21:32:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.233 21:32:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.233 21:32:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:20.233 21:32:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:20.233 21:32:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:20.233 21:32:42 -- common/autotest_common.sh@10 -- # set +x 00:17:26.808 21:32:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:26.808 21:32:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:26.808 21:32:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:26.808 21:32:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:26.808 21:32:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:26.808 21:32:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:26.808 21:32:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:26.808 21:32:49 -- nvmf/common.sh@295 -- # net_devs=() 00:17:26.808 21:32:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:26.808 21:32:49 -- nvmf/common.sh@296 -- # e810=() 00:17:26.808 
21:32:49 -- nvmf/common.sh@296 -- # local -ga e810 00:17:26.808 21:32:49 -- nvmf/common.sh@297 -- # x722=() 00:17:26.808 21:32:49 -- nvmf/common.sh@297 -- # local -ga x722 00:17:26.808 21:32:49 -- nvmf/common.sh@298 -- # mlx=() 00:17:26.808 21:32:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:26.808 21:32:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.808 21:32:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.808 21:32:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.808 21:32:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.808 21:32:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.808 21:32:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.808 21:32:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.808 21:32:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.808 21:32:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.808 21:32:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.808 21:32:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.808 21:32:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:26.808 21:32:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:26.808 21:32:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:26.808 21:32:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.808 21:32:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:26.808 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:26.808 21:32:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.808 21:32:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:26.808 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:26.808 21:32:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:26.808 21:32:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:26.808 21:32:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.808 21:32:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.808 21:32:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:26.808 21:32:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.808 21:32:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:26.808 Found net devices under 
0000:af:00.0: cvl_0_0 00:17:26.808 21:32:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.808 21:32:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.808 21:32:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.808 21:32:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:26.808 21:32:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.808 21:32:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:26.809 Found net devices under 0000:af:00.1: cvl_0_1 00:17:26.809 21:32:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.809 21:32:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:26.809 21:32:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:26.809 21:32:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:26.809 21:32:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:26.809 21:32:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:26.809 21:32:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.809 21:32:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.809 21:32:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.809 21:32:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:26.809 21:32:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.809 21:32:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.809 21:32:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:26.809 21:32:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.809 21:32:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.809 21:32:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:26.809 21:32:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:26.809 21:32:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.809 21:32:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.809 21:32:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.809 21:32:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.809 21:32:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:26.809 21:32:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.809 21:32:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.809 21:32:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.809 21:32:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:26.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:17:26.809 00:17:26.809 --- 10.0.0.2 ping statistics --- 00:17:26.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.809 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:17:26.809 21:32:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:26.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:17:26.809 00:17:26.809 --- 10.0.0.1 ping statistics --- 00:17:26.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.809 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:17:26.809 21:32:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.809 21:32:49 -- nvmf/common.sh@411 -- # return 0 00:17:26.809 21:32:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:26.809 21:32:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.809 21:32:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:26.809 21:32:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:26.809 21:32:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.809 21:32:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:26.809 21:32:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:26.809 21:32:49 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:26.809 21:32:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:26.809 21:32:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:26.809 21:32:49 -- common/autotest_common.sh@10 -- # set +x 00:17:26.809 21:32:49 -- nvmf/common.sh@470 -- # nvmfpid=2869857 00:17:26.809 21:32:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:26.809 21:32:49 -- nvmf/common.sh@471 -- # waitforlisten 2869857 00:17:26.809 21:32:49 -- common/autotest_common.sh@817 -- # '[' -z 2869857 ']' 00:17:26.809 21:32:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.809 21:32:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:26.809 21:32:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.809 21:32:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:26.809 21:32:49 -- common/autotest_common.sh@10 -- # set +x 00:17:27.073 [2024-04-24 21:32:49.704654] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:17:27.073 [2024-04-24 21:32:49.704704] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.073 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.073 [2024-04-24 21:32:49.781890] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.073 [2024-04-24 21:32:49.853280] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.073 [2024-04-24 21:32:49.853316] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.073 [2024-04-24 21:32:49.853325] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.073 [2024-04-24 21:32:49.853334] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.073 [2024-04-24 21:32:49.853341] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
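Note: nvmf_tgt is started here with --wait-for-rpc, which holds framework initialization until an explicit RPC; the tls.sh trace that follows uses that window to switch the default socket implementation to ssl and to probe TLS versions and kTLS before any subsystem exists. Condensed (rpc.py standing in for the full scripts/rpc.py path used in the log):

  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13        # versions 7 and 0 are probed the same way
  rpc.py sock_impl_get_options -i ssl | jq -r .tls_version    # read back and verify
  rpc.py sock_impl_set_options -i ssl --enable-ktls           # toggled on, then back off with --disable-ktls
  rpc.py framework_start_init                                 # only now does the target finish initializing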
00:17:27.073 [2024-04-24 21:32:49.853362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.641 21:32:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:27.641 21:32:50 -- common/autotest_common.sh@850 -- # return 0 00:17:27.641 21:32:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:27.641 21:32:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:27.641 21:32:50 -- common/autotest_common.sh@10 -- # set +x 00:17:27.903 21:32:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.903 21:32:50 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:27.903 21:32:50 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:27.903 true 00:17:27.903 21:32:50 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:27.903 21:32:50 -- target/tls.sh@73 -- # jq -r .tls_version 00:17:28.167 21:32:50 -- target/tls.sh@73 -- # version=0 00:17:28.167 21:32:50 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:28.167 21:32:50 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:28.427 21:32:51 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:28.427 21:32:51 -- target/tls.sh@81 -- # jq -r .tls_version 00:17:28.427 21:32:51 -- target/tls.sh@81 -- # version=13 00:17:28.427 21:32:51 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:28.427 21:32:51 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:28.686 21:32:51 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:28.686 21:32:51 -- target/tls.sh@89 -- # jq -r .tls_version 00:17:28.686 21:32:51 -- target/tls.sh@89 -- # version=7 00:17:28.686 21:32:51 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:28.686 21:32:51 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:28.686 21:32:51 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:28.946 21:32:51 -- target/tls.sh@96 -- # ktls=false 00:17:28.946 21:32:51 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:28.946 21:32:51 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:29.204 21:32:51 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:29.204 21:32:51 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:29.204 21:32:52 -- target/tls.sh@104 -- # ktls=true 00:17:29.204 21:32:52 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:29.204 21:32:52 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:29.464 21:32:52 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:29.464 21:32:52 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:29.723 21:32:52 -- target/tls.sh@112 -- # ktls=false 00:17:29.723 21:32:52 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:29.723 21:32:52 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:17:29.723 21:32:52 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:29.723 21:32:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:29.723 21:32:52 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:29.723 21:32:52 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:17:29.723 21:32:52 -- nvmf/common.sh@693 -- # digest=1 00:17:29.723 21:32:52 -- nvmf/common.sh@694 -- # python - 00:17:29.723 21:32:52 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:29.723 21:32:52 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:29.723 21:32:52 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:29.723 21:32:52 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:29.723 21:32:52 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:29.723 21:32:52 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:17:29.723 21:32:52 -- nvmf/common.sh@693 -- # digest=1 00:17:29.723 21:32:52 -- nvmf/common.sh@694 -- # python - 00:17:29.723 21:32:52 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:29.723 21:32:52 -- target/tls.sh@121 -- # mktemp 00:17:29.723 21:32:52 -- target/tls.sh@121 -- # key_path=/tmp/tmp.a5Ur20dpC0 00:17:29.723 21:32:52 -- target/tls.sh@122 -- # mktemp 00:17:29.723 21:32:52 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.3BXhS9CNRs 00:17:29.723 21:32:52 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:29.723 21:32:52 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:29.723 21:32:52 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.a5Ur20dpC0 00:17:29.723 21:32:52 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3BXhS9CNRs 00:17:29.723 21:32:52 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:29.983 21:32:52 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:30.242 21:32:52 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.a5Ur20dpC0 00:17:30.242 21:32:52 -- target/tls.sh@49 -- # local key=/tmp/tmp.a5Ur20dpC0 00:17:30.242 21:32:52 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:30.242 [2024-04-24 21:32:53.077120] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.242 21:32:53 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:30.502 21:32:53 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:30.761 [2024-04-24 21:32:53.389915] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:30.761 [2024-04-24 21:32:53.390118] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.761 21:32:53 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:30.761 malloc0 00:17:30.761 21:32:53 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:31.020 21:32:53 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a5Ur20dpC0 00:17:31.020 [2024-04-24 21:32:53.879526] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:31.020 21:32:53 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.a5Ur20dpC0 00:17:31.280 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.270 Initializing NVMe Controllers 00:17:41.270 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:41.270 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:41.270 Initialization complete. Launching workers. 00:17:41.270 ======================================================== 00:17:41.270 Latency(us) 00:17:41.270 Device Information : IOPS MiB/s Average min max 00:17:41.270 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16197.36 63.27 3951.73 829.99 6010.39 00:17:41.270 ======================================================== 00:17:41.270 Total : 16197.36 63.27 3951.73 829.99 6010.39 00:17:41.270 00:17:41.270 21:33:03 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.a5Ur20dpC0 00:17:41.270 21:33:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:41.270 21:33:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:41.270 21:33:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:41.270 21:33:03 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.a5Ur20dpC0' 00:17:41.270 21:33:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:41.270 21:33:03 -- target/tls.sh@28 -- # bdevperf_pid=2872463 00:17:41.270 21:33:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:41.270 21:33:03 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:41.270 21:33:03 -- target/tls.sh@31 -- # waitforlisten 2872463 /var/tmp/bdevperf.sock 00:17:41.270 21:33:03 -- common/autotest_common.sh@817 -- # '[' -z 2872463 ']' 00:17:41.270 21:33:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.270 21:33:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:41.270 21:33:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.270 21:33:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:41.270 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:17:41.270 [2024-04-24 21:33:04.044142] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
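Note: the perf run above is the positive TLS case. The NVMeTLSkey-1:01:...: strings generated earlier by format_interchange_psk follow the NVMe/TCP PSK interchange format (base64 of the configured key with a CRC-32 appended); the key file is registered on the target per host NQN and the same file is handed to the initiator. The target-side sequence, condensed from the trace:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a5Ur20dpC0

The initiator then connects with the matching key, via spdk_nvme_perf's --psk-path above or bdevperf's bdev_nvme_attach_controller --psk in the run that starts below.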
00:17:41.270 [2024-04-24 21:33:04.044195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872463 ] 00:17:41.270 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.270 [2024-04-24 21:33:04.111255] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.529 [2024-04-24 21:33:04.185524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.098 21:33:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:42.098 21:33:04 -- common/autotest_common.sh@850 -- # return 0 00:17:42.098 21:33:04 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a5Ur20dpC0 00:17:42.358 [2024-04-24 21:33:05.020458] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:42.358 [2024-04-24 21:33:05.020528] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:42.358 TLSTESTn1 00:17:42.358 21:33:05 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:42.358 Running I/O for 10 seconds... 00:17:54.581 00:17:54.581 Latency(us) 00:17:54.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.581 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:54.581 Verification LBA range: start 0x0 length 0x2000 00:17:54.581 TLSTESTn1 : 10.07 1579.43 6.17 0.00 0.00 80820.25 7235.17 121634.82 00:17:54.581 =================================================================================================================== 00:17:54.581 Total : 1579.43 6.17 0.00 0.00 80820.25 7235.17 121634.82 00:17:54.581 0 00:17:54.581 21:33:15 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:54.581 21:33:15 -- target/tls.sh@45 -- # killprocess 2872463 00:17:54.581 21:33:15 -- common/autotest_common.sh@936 -- # '[' -z 2872463 ']' 00:17:54.581 21:33:15 -- common/autotest_common.sh@940 -- # kill -0 2872463 00:17:54.581 21:33:15 -- common/autotest_common.sh@941 -- # uname 00:17:54.581 21:33:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:54.581 21:33:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2872463 00:17:54.581 21:33:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:54.581 21:33:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:54.581 21:33:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2872463' 00:17:54.581 killing process with pid 2872463 00:17:54.581 21:33:15 -- common/autotest_common.sh@955 -- # kill 2872463 00:17:54.581 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.581 00:17:54.581 Latency(us) 00:17:54.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.581 =================================================================================================================== 00:17:54.581 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.581 [2024-04-24 21:33:15.387507] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:54.581 21:33:15 -- common/autotest_common.sh@960 -- # wait 2872463 00:17:54.581 21:33:15 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3BXhS9CNRs 00:17:54.581 21:33:15 -- common/autotest_common.sh@638 -- # local es=0 00:17:54.582 21:33:15 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3BXhS9CNRs 00:17:54.582 21:33:15 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:54.582 21:33:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:54.582 21:33:15 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:54.582 21:33:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:54.582 21:33:15 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3BXhS9CNRs 00:17:54.582 21:33:15 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:54.582 21:33:15 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:54.582 21:33:15 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:54.582 21:33:15 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3BXhS9CNRs' 00:17:54.582 21:33:15 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:54.582 21:33:15 -- target/tls.sh@28 -- # bdevperf_pid=2874763 00:17:54.582 21:33:15 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.582 21:33:15 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:54.582 21:33:15 -- target/tls.sh@31 -- # waitforlisten 2874763 /var/tmp/bdevperf.sock 00:17:54.582 21:33:15 -- common/autotest_common.sh@817 -- # '[' -z 2874763 ']' 00:17:54.582 21:33:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.582 21:33:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:54.582 21:33:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.582 21:33:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:54.582 21:33:15 -- common/autotest_common.sh@10 -- # set +x 00:17:54.582 [2024-04-24 21:33:15.636202] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
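Note: this second bdevperf launch is a negative test. The initiator presents the second key (/tmp/tmp.3BXhS9CNRs) while the target only has host1's key /tmp/tmp.a5Ur20dpC0 registered, so the TLS handshake is expected to fail; the NOT wrapper from autotest_common.sh (the valid_exec_arg/es bookkeeping traced here) succeeds only if the wrapped command exits nonzero:

  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3BXhS9CNRs
  # passes iff run_bdevperf fails -- here with "Transport endpoint is not connected", as seen below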
00:17:54.582 [2024-04-24 21:33:15.636254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874763 ] 00:17:54.582 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.582 [2024-04-24 21:33:15.701120] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.582 [2024-04-24 21:33:15.767758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.582 21:33:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:54.582 21:33:16 -- common/autotest_common.sh@850 -- # return 0 00:17:54.582 21:33:16 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3BXhS9CNRs 00:17:54.582 [2024-04-24 21:33:16.566289] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:54.582 [2024-04-24 21:33:16.566359] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:54.582 [2024-04-24 21:33:16.572687] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:54.582 [2024-04-24 21:33:16.572833] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0a710 (107): Transport endpoint is not connected 00:17:54.582 [2024-04-24 21:33:16.573713] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0a710 (9): Bad file descriptor 00:17:54.582 [2024-04-24 21:33:16.574714] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:54.582 [2024-04-24 21:33:16.574726] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:54.582 [2024-04-24 21:33:16.574735] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:54.582 request: 00:17:54.582 { 00:17:54.582 "name": "TLSTEST", 00:17:54.582 "trtype": "tcp", 00:17:54.582 "traddr": "10.0.0.2", 00:17:54.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.582 "adrfam": "ipv4", 00:17:54.582 "trsvcid": "4420", 00:17:54.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.582 "psk": "/tmp/tmp.3BXhS9CNRs", 00:17:54.582 "method": "bdev_nvme_attach_controller", 00:17:54.582 "req_id": 1 00:17:54.582 } 00:17:54.582 Got JSON-RPC error response 00:17:54.582 response: 00:17:54.582 { 00:17:54.582 "code": -32602, 00:17:54.582 "message": "Invalid parameters" 00:17:54.582 } 00:17:54.582 21:33:16 -- target/tls.sh@36 -- # killprocess 2874763 00:17:54.582 21:33:16 -- common/autotest_common.sh@936 -- # '[' -z 2874763 ']' 00:17:54.582 21:33:16 -- common/autotest_common.sh@940 -- # kill -0 2874763 00:17:54.582 21:33:16 -- common/autotest_common.sh@941 -- # uname 00:17:54.582 21:33:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:54.582 21:33:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2874763 00:17:54.582 21:33:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:54.582 21:33:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:54.582 21:33:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2874763' 00:17:54.582 killing process with pid 2874763 00:17:54.582 21:33:16 -- common/autotest_common.sh@955 -- # kill 2874763 00:17:54.582 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.582 00:17:54.582 Latency(us) 00:17:54.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.582 =================================================================================================================== 00:17:54.582 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:54.582 [2024-04-24 21:33:16.650938] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:54.582 21:33:16 -- common/autotest_common.sh@960 -- # wait 2874763 00:17:54.582 21:33:16 -- target/tls.sh@37 -- # return 1 00:17:54.582 21:33:16 -- common/autotest_common.sh@641 -- # es=1 00:17:54.582 21:33:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:54.582 21:33:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:54.582 21:33:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:54.582 21:33:16 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.a5Ur20dpC0 00:17:54.582 21:33:16 -- common/autotest_common.sh@638 -- # local es=0 00:17:54.582 21:33:16 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.a5Ur20dpC0 00:17:54.582 21:33:16 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:54.582 21:33:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:54.582 21:33:16 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:54.582 21:33:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:54.582 21:33:16 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.a5Ur20dpC0 00:17:54.582 21:33:16 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:54.582 21:33:16 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:54.582 21:33:16 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:17:54.582 21:33:16 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.a5Ur20dpC0' 00:17:54.582 21:33:16 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:54.582 21:33:16 -- target/tls.sh@28 -- # bdevperf_pid=2875039 00:17:54.582 21:33:16 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.582 21:33:16 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:54.582 21:33:16 -- target/tls.sh@31 -- # waitforlisten 2875039 /var/tmp/bdevperf.sock 00:17:54.582 21:33:16 -- common/autotest_common.sh@817 -- # '[' -z 2875039 ']' 00:17:54.582 21:33:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.582 21:33:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:54.582 21:33:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.582 21:33:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:54.582 21:33:16 -- common/autotest_common.sh@10 -- # set +x 00:17:54.582 [2024-04-24 21:33:16.892533] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:17:54.582 [2024-04-24 21:33:16.892584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875039 ] 00:17:54.582 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.582 [2024-04-24 21:33:16.958637] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.582 [2024-04-24 21:33:17.033264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.842 21:33:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:54.842 21:33:17 -- common/autotest_common.sh@850 -- # return 0 00:17:54.842 21:33:17 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.a5Ur20dpC0 00:17:55.102 [2024-04-24 21:33:17.830578] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:55.102 [2024-04-24 21:33:17.830651] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:55.102 [2024-04-24 21:33:17.835795] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:55.102 [2024-04-24 21:33:17.835817] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:55.102 [2024-04-24 21:33:17.835843] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:55.102 [2024-04-24 21:33:17.837003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a9710 (107): Transport endpoint is not connected 00:17:55.102 [2024-04-24 21:33:17.837995] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a9710 (9): Bad file descriptor 00:17:55.102 [2024-04-24 21:33:17.838996] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:55.102 [2024-04-24 21:33:17.839008] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:55.102 [2024-04-24 21:33:17.839018] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:55.102 request: 00:17:55.102 { 00:17:55.102 "name": "TLSTEST", 00:17:55.102 "trtype": "tcp", 00:17:55.102 "traddr": "10.0.0.2", 00:17:55.102 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:55.102 "adrfam": "ipv4", 00:17:55.102 "trsvcid": "4420", 00:17:55.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.102 "psk": "/tmp/tmp.a5Ur20dpC0", 00:17:55.102 "method": "bdev_nvme_attach_controller", 00:17:55.102 "req_id": 1 00:17:55.102 } 00:17:55.102 Got JSON-RPC error response 00:17:55.102 response: 00:17:55.102 { 00:17:55.102 "code": -32602, 00:17:55.102 "message": "Invalid parameters" 00:17:55.102 } 00:17:55.102 21:33:17 -- target/tls.sh@36 -- # killprocess 2875039 00:17:55.102 21:33:17 -- common/autotest_common.sh@936 -- # '[' -z 2875039 ']' 00:17:55.102 21:33:17 -- common/autotest_common.sh@940 -- # kill -0 2875039 00:17:55.102 21:33:17 -- common/autotest_common.sh@941 -- # uname 00:17:55.102 21:33:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:55.102 21:33:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2875039 00:17:55.102 21:33:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:55.102 21:33:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:55.102 21:33:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2875039' 00:17:55.102 killing process with pid 2875039 00:17:55.102 21:33:17 -- common/autotest_common.sh@955 -- # kill 2875039 00:17:55.102 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.102 00:17:55.102 Latency(us) 00:17:55.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.102 =================================================================================================================== 00:17:55.102 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:55.102 [2024-04-24 21:33:17.911147] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:55.102 21:33:17 -- common/autotest_common.sh@960 -- # wait 2875039 00:17:55.362 21:33:18 -- target/tls.sh@37 -- # return 1 00:17:55.362 21:33:18 -- common/autotest_common.sh@641 -- # es=1 00:17:55.362 21:33:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:55.362 21:33:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:55.362 21:33:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:55.362 21:33:18 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.a5Ur20dpC0 00:17:55.362 21:33:18 -- common/autotest_common.sh@638 -- # local es=0 00:17:55.362 21:33:18 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.a5Ur20dpC0 00:17:55.362 21:33:18 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:55.362 21:33:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:55.362 21:33:18 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:55.362 21:33:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:55.362 21:33:18 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.a5Ur20dpC0 00:17:55.362 21:33:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:55.362 21:33:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:55.362 21:33:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:55.362 21:33:18 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.a5Ur20dpC0' 00:17:55.362 21:33:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.362 21:33:18 -- target/tls.sh@28 -- # bdevperf_pid=2875307 00:17:55.362 21:33:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:55.362 21:33:18 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.362 21:33:18 -- target/tls.sh@31 -- # waitforlisten 2875307 /var/tmp/bdevperf.sock 00:17:55.362 21:33:18 -- common/autotest_common.sh@817 -- # '[' -z 2875307 ']' 00:17:55.362 21:33:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.362 21:33:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:55.362 21:33:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.362 21:33:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:55.362 21:33:18 -- common/autotest_common.sh@10 -- # set +x 00:17:55.362 [2024-04-24 21:33:18.155328] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:17:55.362 [2024-04-24 21:33:18.155380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875307 ] 00:17:55.362 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.362 [2024-04-24 21:33:18.220681] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.622 [2024-04-24 21:33:18.288189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.189 21:33:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:56.189 21:33:18 -- common/autotest_common.sh@850 -- # return 0 00:17:56.189 21:33:18 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a5Ur20dpC0 00:17:56.450 [2024-04-24 21:33:19.078728] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.450 [2024-04-24 21:33:19.078800] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:56.450 [2024-04-24 21:33:19.089026] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:56.450 [2024-04-24 21:33:19.089047] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:56.450 [2024-04-24 21:33:19.089072] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:56.450 [2024-04-24 21:33:19.090296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146b710 (107): Transport endpoint is not connected 00:17:56.450 [2024-04-24 21:33:19.091288] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146b710 (9): Bad file descriptor 00:17:56.450 [2024-04-24 21:33:19.092289] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:56.450 [2024-04-24 21:33:19.092301] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:56.450 [2024-04-24 21:33:19.092310] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:56.450 request: 00:17:56.450 { 00:17:56.450 "name": "TLSTEST", 00:17:56.450 "trtype": "tcp", 00:17:56.450 "traddr": "10.0.0.2", 00:17:56.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.450 "adrfam": "ipv4", 00:17:56.450 "trsvcid": "4420", 00:17:56.450 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:56.450 "psk": "/tmp/tmp.a5Ur20dpC0", 00:17:56.450 "method": "bdev_nvme_attach_controller", 00:17:56.450 "req_id": 1 00:17:56.450 } 00:17:56.450 Got JSON-RPC error response 00:17:56.450 response: 00:17:56.450 { 00:17:56.450 "code": -32602, 00:17:56.450 "message": "Invalid parameters" 00:17:56.450 } 00:17:56.450 21:33:19 -- target/tls.sh@36 -- # killprocess 2875307 00:17:56.450 21:33:19 -- common/autotest_common.sh@936 -- # '[' -z 2875307 ']' 00:17:56.450 21:33:19 -- common/autotest_common.sh@940 -- # kill -0 2875307 00:17:56.450 21:33:19 -- common/autotest_common.sh@941 -- # uname 00:17:56.450 21:33:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:56.450 21:33:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2875307 00:17:56.450 21:33:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:56.450 21:33:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:56.450 21:33:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2875307' 00:17:56.450 killing process with pid 2875307 00:17:56.450 21:33:19 -- common/autotest_common.sh@955 -- # kill 2875307 00:17:56.450 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.450 00:17:56.450 Latency(us) 00:17:56.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.450 =================================================================================================================== 00:17:56.450 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:56.450 [2024-04-24 21:33:19.163353] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:56.450 21:33:19 -- common/autotest_common.sh@960 -- # wait 2875307 00:17:56.709 21:33:19 -- target/tls.sh@37 -- # return 1 00:17:56.709 21:33:19 -- common/autotest_common.sh@641 -- # es=1 00:17:56.709 21:33:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:56.709 21:33:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:56.709 21:33:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:56.709 21:33:19 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:56.709 21:33:19 -- common/autotest_common.sh@638 -- # local es=0 00:17:56.709 21:33:19 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:56.709 21:33:19 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:56.709 21:33:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:56.709 21:33:19 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:56.709 21:33:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:56.709 21:33:19 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:56.709 21:33:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.709 21:33:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.709 21:33:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:56.709 21:33:19 -- target/tls.sh@23 -- # psk= 
00:17:56.709 21:33:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.709 21:33:19 -- target/tls.sh@28 -- # bdevperf_pid=2875449 00:17:56.709 21:33:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.709 21:33:19 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.709 21:33:19 -- target/tls.sh@31 -- # waitforlisten 2875449 /var/tmp/bdevperf.sock 00:17:56.709 21:33:19 -- common/autotest_common.sh@817 -- # '[' -z 2875449 ']' 00:17:56.709 21:33:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.709 21:33:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:56.709 21:33:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.709 21:33:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:56.709 21:33:19 -- common/autotest_common.sh@10 -- # set +x 00:17:56.709 [2024-04-24 21:33:19.403191] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:17:56.709 [2024-04-24 21:33:19.403242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875449 ] 00:17:56.709 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.709 [2024-04-24 21:33:19.470209] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.709 [2024-04-24 21:33:19.544311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.645 21:33:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:57.645 21:33:20 -- common/autotest_common.sh@850 -- # return 0 00:17:57.645 21:33:20 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:57.645 [2024-04-24 21:33:20.372312] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:57.645 [2024-04-24 21:33:20.373736] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ddc0 (9): Bad file descriptor 00:17:57.645 [2024-04-24 21:33:20.374736] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:57.645 [2024-04-24 21:33:20.374748] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:57.645 [2024-04-24 21:33:20.374757] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:57.645 request: 00:17:57.645 { 00:17:57.645 "name": "TLSTEST", 00:17:57.645 "trtype": "tcp", 00:17:57.645 "traddr": "10.0.0.2", 00:17:57.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.645 "adrfam": "ipv4", 00:17:57.645 "trsvcid": "4420", 00:17:57.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.645 "method": "bdev_nvme_attach_controller", 00:17:57.645 "req_id": 1 00:17:57.645 } 00:17:57.645 Got JSON-RPC error response 00:17:57.645 response: 00:17:57.645 { 00:17:57.645 "code": -32602, 00:17:57.645 "message": "Invalid parameters" 00:17:57.645 } 00:17:57.645 21:33:20 -- target/tls.sh@36 -- # killprocess 2875449 00:17:57.645 21:33:20 -- common/autotest_common.sh@936 -- # '[' -z 2875449 ']' 00:17:57.645 21:33:20 -- common/autotest_common.sh@940 -- # kill -0 2875449 00:17:57.645 21:33:20 -- common/autotest_common.sh@941 -- # uname 00:17:57.645 21:33:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.645 21:33:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2875449 00:17:57.645 21:33:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:57.645 21:33:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:57.645 21:33:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2875449' 00:17:57.645 killing process with pid 2875449 00:17:57.645 21:33:20 -- common/autotest_common.sh@955 -- # kill 2875449 00:17:57.645 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.645 00:17:57.645 Latency(us) 00:17:57.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.645 =================================================================================================================== 00:17:57.645 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.645 21:33:20 -- common/autotest_common.sh@960 -- # wait 2875449 00:17:57.904 21:33:20 -- target/tls.sh@37 -- # return 1 00:17:57.904 21:33:20 -- common/autotest_common.sh@641 -- # es=1 00:17:57.904 21:33:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:57.904 21:33:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:57.904 21:33:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:57.904 21:33:20 -- target/tls.sh@158 -- # killprocess 2869857 00:17:57.904 21:33:20 -- common/autotest_common.sh@936 -- # '[' -z 2869857 ']' 00:17:57.904 21:33:20 -- common/autotest_common.sh@940 -- # kill -0 2869857 00:17:57.904 21:33:20 -- common/autotest_common.sh@941 -- # uname 00:17:57.904 21:33:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.904 21:33:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2869857 00:17:57.904 21:33:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:57.904 21:33:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:57.904 21:33:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2869857' 00:17:57.904 killing process with pid 2869857 00:17:57.904 21:33:20 -- common/autotest_common.sh@955 -- # kill 2869857 00:17:57.904 [2024-04-24 21:33:20.707646] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:57.904 21:33:20 -- common/autotest_common.sh@960 -- # wait 2869857 00:17:58.163 21:33:20 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:58.163 21:33:20 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:17:58.163 21:33:20 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:58.163 21:33:20 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:58.163 21:33:20 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:58.163 21:33:20 -- nvmf/common.sh@693 -- # digest=2 00:17:58.163 21:33:20 -- nvmf/common.sh@694 -- # python - 00:17:58.163 21:33:20 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:58.163 21:33:20 -- target/tls.sh@160 -- # mktemp 00:17:58.163 21:33:20 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.3qQaBjvwJ8 00:17:58.163 21:33:20 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:58.163 21:33:20 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.3qQaBjvwJ8 00:17:58.163 21:33:20 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:58.163 21:33:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:58.163 21:33:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:58.163 21:33:20 -- common/autotest_common.sh@10 -- # set +x 00:17:58.163 21:33:20 -- nvmf/common.sh@470 -- # nvmfpid=2875759 00:17:58.163 21:33:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:58.163 21:33:20 -- nvmf/common.sh@471 -- # waitforlisten 2875759 00:17:58.163 21:33:20 -- common/autotest_common.sh@817 -- # '[' -z 2875759 ']' 00:17:58.163 21:33:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.163 21:33:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:58.163 21:33:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.163 21:33:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:58.163 21:33:20 -- common/autotest_common.sh@10 -- # set +x 00:17:58.163 [2024-04-24 21:33:21.035370] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:17:58.163 [2024-04-24 21:33:21.035418] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.422 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.422 [2024-04-24 21:33:21.108585] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.422 [2024-04-24 21:33:21.180248] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.422 [2024-04-24 21:33:21.180283] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.422 [2024-04-24 21:33:21.180293] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.422 [2024-04-24 21:33:21.180301] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.422 [2024-04-24 21:33:21.180324] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
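The key_long value derived above can be rebuilt outside the test harness. This is a sketch of what the inline python step in nvmf/common.sh computes, assuming (as the helper's prefix/key/digest locals suggest) that the payload is the literal ASCII key followed by its little-endian CRC32, base64-encoded and wrapped in the NVMeTLSkey-1:0<digest>: envelope:

import base64
import struct
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    # base64(key bytes || CRC32(key), CRC packed little-endian), wrapped in
    # the TLS PSK interchange envelope; digest 2 tags the 48-byte/SHA-384 case.
    raw = key.encode()
    crc = struct.pack("<I", zlib.crc32(raw))
    return f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(raw + crc).decode()}:"

print(format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2))
# Expected to match the key_long captured above:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

Decoding the logged key back supports this layout: the first 64 base64 characters decode to the ASCII key itself, and the trailing "wWXNJw==" contributes four extra bytes consistent with an appended checksum.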
00:17:58.422 [2024-04-24 21:33:21.180345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.988 21:33:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:58.988 21:33:21 -- common/autotest_common.sh@850 -- # return 0 00:17:58.988 21:33:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:58.988 21:33:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:58.988 21:33:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.988 21:33:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.988 21:33:21 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.3qQaBjvwJ8 00:17:58.988 21:33:21 -- target/tls.sh@49 -- # local key=/tmp/tmp.3qQaBjvwJ8 00:17:58.988 21:33:21 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:59.247 [2024-04-24 21:33:22.022309] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.247 21:33:22 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:59.505 21:33:22 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:59.505 [2024-04-24 21:33:22.351140] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:59.505 [2024-04-24 21:33:22.351333] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.505 21:33:22 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:59.764 malloc0 00:17:59.764 21:33:22 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:00.023 21:33:22 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3qQaBjvwJ8 00:18:00.023 [2024-04-24 21:33:22.840658] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:00.023 21:33:22 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3qQaBjvwJ8 00:18:00.023 21:33:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:00.023 21:33:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:00.023 21:33:22 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:00.023 21:33:22 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3qQaBjvwJ8' 00:18:00.023 21:33:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:00.023 21:33:22 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:00.023 21:33:22 -- target/tls.sh@28 -- # bdevperf_pid=2876161 00:18:00.023 21:33:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:00.023 21:33:22 -- target/tls.sh@31 -- # waitforlisten 2876161 /var/tmp/bdevperf.sock 00:18:00.023 21:33:22 -- common/autotest_common.sh@817 -- # '[' -z 2876161 ']' 00:18:00.023 21:33:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.023 21:33:22 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:18:00.023 21:33:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.023 21:33:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:00.023 21:33:22 -- common/autotest_common.sh@10 -- # set +x 00:18:00.023 [2024-04-24 21:33:22.892305] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:18:00.023 [2024-04-24 21:33:22.892357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876161 ] 00:18:00.282 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.282 [2024-04-24 21:33:22.959829] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.283 [2024-04-24 21:33:23.033136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.851 21:33:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:00.851 21:33:23 -- common/autotest_common.sh@850 -- # return 0 00:18:00.851 21:33:23 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3qQaBjvwJ8 00:18:01.109 [2024-04-24 21:33:23.834722] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.110 [2024-04-24 21:33:23.834799] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:01.110 TLSTESTn1 00:18:01.110 21:33:23 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:01.404 Running I/O for 10 seconds... 
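The attach succeeds this time (the key file was created with chmod 0600 at tls.sh@162) and TLSTESTn1 runs ten seconds of verify I/O. The results table that follows is internally consistent; a quick check of the IOPS-to-MiB/s conversion for the 128-deep, 4 KiB workload, with both figures read off the TLSTESTn1 row below:

iops = 1527.58       # IOPS column of the TLSTESTn1 row
io_size = 4096       # -o 4096 on the bdevperf command line
print(round(iops * io_size / 2**20, 2))   # -> 5.97, matching the MiB/s column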
00:18:11.387 00:18:11.387 Latency(us) 00:18:11.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.387 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:11.387 Verification LBA range: start 0x0 length 0x2000 00:18:11.387 TLSTESTn1 : 10.07 1527.58 5.97 0.00 0.00 83534.11 5531.24 119118.23 00:18:11.387 =================================================================================================================== 00:18:11.387 Total : 1527.58 5.97 0.00 0.00 83534.11 5531.24 119118.23 00:18:11.387 0 00:18:11.387 21:33:34 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:11.387 21:33:34 -- target/tls.sh@45 -- # killprocess 2876161 00:18:11.387 21:33:34 -- common/autotest_common.sh@936 -- # '[' -z 2876161 ']' 00:18:11.387 21:33:34 -- common/autotest_common.sh@940 -- # kill -0 2876161 00:18:11.387 21:33:34 -- common/autotest_common.sh@941 -- # uname 00:18:11.387 21:33:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:11.387 21:33:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2876161 00:18:11.387 21:33:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:11.387 21:33:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:11.387 21:33:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2876161' 00:18:11.387 killing process with pid 2876161 00:18:11.387 21:33:34 -- common/autotest_common.sh@955 -- # kill 2876161 00:18:11.387 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.387 00:18:11.387 Latency(us) 00:18:11.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.387 =================================================================================================================== 00:18:11.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.387 [2024-04-24 21:33:34.204647] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:11.387 21:33:34 -- common/autotest_common.sh@960 -- # wait 2876161 00:18:11.646 21:33:34 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.3qQaBjvwJ8 00:18:11.646 21:33:34 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3qQaBjvwJ8 00:18:11.646 21:33:34 -- common/autotest_common.sh@638 -- # local es=0 00:18:11.646 21:33:34 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3qQaBjvwJ8 00:18:11.646 21:33:34 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:11.646 21:33:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:11.646 21:33:34 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:11.646 21:33:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:11.646 21:33:34 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3qQaBjvwJ8 00:18:11.646 21:33:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:11.646 21:33:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:11.646 21:33:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:11.646 21:33:34 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3qQaBjvwJ8' 00:18:11.646 21:33:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.646 21:33:34 -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:11.646 21:33:34 -- target/tls.sh@28 -- # bdevperf_pid=2878032 00:18:11.646 21:33:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.646 21:33:34 -- target/tls.sh@31 -- # waitforlisten 2878032 /var/tmp/bdevperf.sock 00:18:11.646 21:33:34 -- common/autotest_common.sh@817 -- # '[' -z 2878032 ']' 00:18:11.646 21:33:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.646 21:33:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:11.646 21:33:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.646 21:33:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:11.646 21:33:34 -- common/autotest_common.sh@10 -- # set +x 00:18:11.647 [2024-04-24 21:33:34.451081] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:18:11.647 [2024-04-24 21:33:34.451137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878032 ] 00:18:11.647 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.647 [2024-04-24 21:33:34.518347] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.906 [2024-04-24 21:33:34.592393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.474 21:33:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:12.474 21:33:35 -- common/autotest_common.sh@850 -- # return 0 00:18:12.474 21:33:35 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3qQaBjvwJ8 00:18:12.734 [2024-04-24 21:33:35.406945] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.734 [2024-04-24 21:33:35.406995] bdev_nvme.c:6067:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:12.734 [2024-04-24 21:33:35.407004] bdev_nvme.c:6176:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.3qQaBjvwJ8 00:18:12.734 request: 00:18:12.734 { 00:18:12.734 "name": "TLSTEST", 00:18:12.734 "trtype": "tcp", 00:18:12.734 "traddr": "10.0.0.2", 00:18:12.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.734 "adrfam": "ipv4", 00:18:12.734 "trsvcid": "4420", 00:18:12.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.734 "psk": "/tmp/tmp.3qQaBjvwJ8", 00:18:12.734 "method": "bdev_nvme_attach_controller", 00:18:12.734 "req_id": 1 00:18:12.734 } 00:18:12.734 Got JSON-RPC error response 00:18:12.734 response: 00:18:12.734 { 00:18:12.734 "code": -1, 00:18:12.734 "message": "Operation not permitted" 00:18:12.734 } 00:18:12.734 21:33:35 -- target/tls.sh@36 -- # killprocess 2878032 00:18:12.734 21:33:35 -- common/autotest_common.sh@936 -- # '[' -z 2878032 ']' 00:18:12.734 21:33:35 -- common/autotest_common.sh@940 -- # kill -0 2878032 00:18:12.734 21:33:35 -- common/autotest_common.sh@941 -- # uname 00:18:12.734 21:33:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.734 
21:33:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2878032 00:18:12.734 21:33:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:12.734 21:33:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:12.734 21:33:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2878032' 00:18:12.734 killing process with pid 2878032 00:18:12.734 21:33:35 -- common/autotest_common.sh@955 -- # kill 2878032 00:18:12.734 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.734 00:18:12.734 Latency(us) 00:18:12.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.734 =================================================================================================================== 00:18:12.734 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:12.734 21:33:35 -- common/autotest_common.sh@960 -- # wait 2878032 00:18:12.994 21:33:35 -- target/tls.sh@37 -- # return 1 00:18:12.994 21:33:35 -- common/autotest_common.sh@641 -- # es=1 00:18:12.994 21:33:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:12.994 21:33:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:12.994 21:33:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:12.994 21:33:35 -- target/tls.sh@174 -- # killprocess 2875759 00:18:12.994 21:33:35 -- common/autotest_common.sh@936 -- # '[' -z 2875759 ']' 00:18:12.994 21:33:35 -- common/autotest_common.sh@940 -- # kill -0 2875759 00:18:12.994 21:33:35 -- common/autotest_common.sh@941 -- # uname 00:18:12.994 21:33:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.994 21:33:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2875759 00:18:12.994 21:33:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:12.994 21:33:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:12.994 21:33:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2875759' 00:18:12.994 killing process with pid 2875759 00:18:12.994 21:33:35 -- common/autotest_common.sh@955 -- # kill 2875759 00:18:12.994 [2024-04-24 21:33:35.727247] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:12.994 21:33:35 -- common/autotest_common.sh@960 -- # wait 2875759 00:18:13.254 21:33:35 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:13.254 21:33:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:13.254 21:33:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:13.254 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:18:13.254 21:33:35 -- nvmf/common.sh@470 -- # nvmfpid=2878313 00:18:13.254 21:33:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:13.254 21:33:35 -- nvmf/common.sh@471 -- # waitforlisten 2878313 00:18:13.254 21:33:35 -- common/autotest_common.sh@817 -- # '[' -z 2878313 ']' 00:18:13.254 21:33:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.254 21:33:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:13.254 21:33:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
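The failure above is the test working as intended: tls.sh@170 deliberately loosened the key file to 0666, and the attach was rejected with "Incorrect permissions for PSK file" before any connection was attempted. Below is a sketch of the gate being exercised; the helper name is ours and the exact mode mask is an assumption (the authoritative check lives in bdev_nvme.c's PSK loader) — what the log establishes is only that 0600 passes and 0666 does not:

import os
import stat

def psk_file_permissions_ok(path: str) -> bool:
    # Hypothetical mirror of the loader's gate. Assumed rule: no group/other
    # access bits may be set on the PSK file, which is consistent with the
    # run above (0600 accepted, 0666 rejected).
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & (stat.S_IRWXG | stat.S_IRWXO) == 0

The very next step in the trace, chmod 0600 on the same /tmp/tmp.3qQaBjvwJ8 file, is what lets the subsequent nvmf_subsystem_add_host and controller attach succeed again.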
00:18:13.254 21:33:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:13.254 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:18:13.254 [2024-04-24 21:33:35.975273] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:18:13.254 [2024-04-24 21:33:35.975326] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.254 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.254 [2024-04-24 21:33:36.039659] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.254 [2024-04-24 21:33:36.108219] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.254 [2024-04-24 21:33:36.108250] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.254 [2024-04-24 21:33:36.108260] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.254 [2024-04-24 21:33:36.108269] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.254 [2024-04-24 21:33:36.108276] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.254 [2024-04-24 21:33:36.108296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.192 21:33:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:14.192 21:33:36 -- common/autotest_common.sh@850 -- # return 0 00:18:14.192 21:33:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:14.192 21:33:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:14.192 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:18:14.192 21:33:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.192 21:33:36 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.3qQaBjvwJ8 00:18:14.192 21:33:36 -- common/autotest_common.sh@638 -- # local es=0 00:18:14.192 21:33:36 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.3qQaBjvwJ8 00:18:14.192 21:33:36 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:18:14.192 21:33:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:14.192 21:33:36 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:18:14.192 21:33:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:14.192 21:33:36 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.3qQaBjvwJ8 00:18:14.192 21:33:36 -- target/tls.sh@49 -- # local key=/tmp/tmp.3qQaBjvwJ8 00:18:14.192 21:33:36 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:14.192 [2024-04-24 21:33:36.978360] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.192 21:33:36 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:14.451 21:33:37 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:14.451 [2024-04-24 21:33:37.307178] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:14.451 [2024-04-24 21:33:37.307371] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.451 21:33:37 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:14.709 malloc0 00:18:14.709 21:33:37 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:14.968 21:33:37 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3qQaBjvwJ8 00:18:14.968 [2024-04-24 21:33:37.788546] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:14.968 [2024-04-24 21:33:37.788567] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:14.968 [2024-04-24 21:33:37.788587] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:14.968 request: 00:18:14.968 { 00:18:14.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.968 "host": "nqn.2016-06.io.spdk:host1", 00:18:14.968 "psk": "/tmp/tmp.3qQaBjvwJ8", 00:18:14.968 "method": "nvmf_subsystem_add_host", 00:18:14.968 "req_id": 1 00:18:14.968 } 00:18:14.968 Got JSON-RPC error response 00:18:14.968 response: 00:18:14.968 { 00:18:14.968 "code": -32603, 00:18:14.968 "message": "Internal error" 00:18:14.968 } 00:18:14.968 21:33:37 -- common/autotest_common.sh@641 -- # es=1 00:18:14.968 21:33:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:14.968 21:33:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:14.968 21:33:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:14.968 21:33:37 -- target/tls.sh@180 -- # killprocess 2878313 00:18:14.968 21:33:37 -- common/autotest_common.sh@936 -- # '[' -z 2878313 ']' 00:18:14.968 21:33:37 -- common/autotest_common.sh@940 -- # kill -0 2878313 00:18:14.968 21:33:37 -- common/autotest_common.sh@941 -- # uname 00:18:14.968 21:33:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:14.968 21:33:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2878313 00:18:15.228 21:33:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:15.228 21:33:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:15.228 21:33:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2878313' 00:18:15.228 killing process with pid 2878313 00:18:15.228 21:33:37 -- common/autotest_common.sh@955 -- # kill 2878313 00:18:15.228 21:33:37 -- common/autotest_common.sh@960 -- # wait 2878313 00:18:15.228 21:33:38 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.3qQaBjvwJ8 00:18:15.228 21:33:38 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:15.228 21:33:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:15.228 21:33:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:15.228 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:18:15.228 21:33:38 -- nvmf/common.sh@470 -- # nvmfpid=2878666 00:18:15.228 21:33:38 -- nvmf/common.sh@471 -- # waitforlisten 2878666 00:18:15.228 21:33:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:15.228 21:33:38 -- common/autotest_common.sh@817 -- # '[' -z 2878666 ']' 00:18:15.228 21:33:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.228 21:33:38 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:18:15.228 21:33:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.228 21:33:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:15.228 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:18:15.488 [2024-04-24 21:33:38.128802] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:18:15.488 [2024-04-24 21:33:38.128855] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.488 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.488 [2024-04-24 21:33:38.204103] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.488 [2024-04-24 21:33:38.271654] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.488 [2024-04-24 21:33:38.271698] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.488 [2024-04-24 21:33:38.271711] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.488 [2024-04-24 21:33:38.271735] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.488 [2024-04-24 21:33:38.271743] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.488 [2024-04-24 21:33:38.271767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.057 21:33:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:16.057 21:33:38 -- common/autotest_common.sh@850 -- # return 0 00:18:16.057 21:33:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:16.057 21:33:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:16.057 21:33:38 -- common/autotest_common.sh@10 -- # set +x 00:18:16.317 21:33:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.317 21:33:38 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.3qQaBjvwJ8 00:18:16.317 21:33:38 -- target/tls.sh@49 -- # local key=/tmp/tmp.3qQaBjvwJ8 00:18:16.317 21:33:38 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:16.317 [2024-04-24 21:33:39.117664] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.317 21:33:39 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:16.576 21:33:39 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:16.576 [2024-04-24 21:33:39.458530] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:16.576 [2024-04-24 21:33:39.458744] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.836 21:33:39 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:16.836 malloc0 00:18:16.836 21:33:39 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:17.095 21:33:39 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3qQaBjvwJ8 00:18:17.095 [2024-04-24 21:33:39.923921] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:17.095 21:33:39 -- target/tls.sh@188 -- # bdevperf_pid=2879089 00:18:17.095 21:33:39 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.095 21:33:39 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.095 21:33:39 -- target/tls.sh@191 -- # waitforlisten 2879089 /var/tmp/bdevperf.sock 00:18:17.095 21:33:39 -- common/autotest_common.sh@817 -- # '[' -z 2879089 ']' 00:18:17.095 21:33:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.095 21:33:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:17.095 21:33:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.096 21:33:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:17.096 21:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:17.355 [2024-04-24 21:33:39.984441] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:18:17.355 [2024-04-24 21:33:39.984501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879089 ] 00:18:17.355 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.355 [2024-04-24 21:33:40.054017] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.355 [2024-04-24 21:33:40.132753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.926 21:33:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:17.926 21:33:40 -- common/autotest_common.sh@850 -- # return 0 00:18:17.926 21:33:40 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3qQaBjvwJ8 00:18:18.184 [2024-04-24 21:33:40.935020] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.184 [2024-04-24 21:33:40.935094] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:18.184 TLSTESTn1 00:18:18.184 21:33:41 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:18.444 21:33:41 -- target/tls.sh@196 -- # tgtconf='{ 00:18:18.444 "subsystems": [ 00:18:18.444 { 00:18:18.444 "subsystem": "keyring", 00:18:18.444 "config": [] 00:18:18.444 }, 00:18:18.444 { 00:18:18.444 "subsystem": "iobuf", 00:18:18.444 "config": [ 00:18:18.444 { 00:18:18.444 "method": "iobuf_set_options", 00:18:18.444 "params": { 00:18:18.444 
"small_pool_count": 8192, 00:18:18.444 "large_pool_count": 1024, 00:18:18.444 "small_bufsize": 8192, 00:18:18.444 "large_bufsize": 135168 00:18:18.444 } 00:18:18.444 } 00:18:18.444 ] 00:18:18.444 }, 00:18:18.444 { 00:18:18.444 "subsystem": "sock", 00:18:18.444 "config": [ 00:18:18.444 { 00:18:18.444 "method": "sock_impl_set_options", 00:18:18.444 "params": { 00:18:18.444 "impl_name": "posix", 00:18:18.444 "recv_buf_size": 2097152, 00:18:18.444 "send_buf_size": 2097152, 00:18:18.444 "enable_recv_pipe": true, 00:18:18.444 "enable_quickack": false, 00:18:18.444 "enable_placement_id": 0, 00:18:18.444 "enable_zerocopy_send_server": true, 00:18:18.444 "enable_zerocopy_send_client": false, 00:18:18.444 "zerocopy_threshold": 0, 00:18:18.444 "tls_version": 0, 00:18:18.444 "enable_ktls": false 00:18:18.444 } 00:18:18.444 }, 00:18:18.444 { 00:18:18.444 "method": "sock_impl_set_options", 00:18:18.444 "params": { 00:18:18.444 "impl_name": "ssl", 00:18:18.444 "recv_buf_size": 4096, 00:18:18.444 "send_buf_size": 4096, 00:18:18.444 "enable_recv_pipe": true, 00:18:18.444 "enable_quickack": false, 00:18:18.444 "enable_placement_id": 0, 00:18:18.444 "enable_zerocopy_send_server": true, 00:18:18.444 "enable_zerocopy_send_client": false, 00:18:18.444 "zerocopy_threshold": 0, 00:18:18.444 "tls_version": 0, 00:18:18.444 "enable_ktls": false 00:18:18.444 } 00:18:18.444 } 00:18:18.444 ] 00:18:18.444 }, 00:18:18.444 { 00:18:18.444 "subsystem": "vmd", 00:18:18.444 "config": [] 00:18:18.444 }, 00:18:18.444 { 00:18:18.444 "subsystem": "accel", 00:18:18.444 "config": [ 00:18:18.444 { 00:18:18.444 "method": "accel_set_options", 00:18:18.444 "params": { 00:18:18.444 "small_cache_size": 128, 00:18:18.444 "large_cache_size": 16, 00:18:18.444 "task_count": 2048, 00:18:18.444 "sequence_count": 2048, 00:18:18.444 "buf_count": 2048 00:18:18.444 } 00:18:18.444 } 00:18:18.444 ] 00:18:18.444 }, 00:18:18.444 { 00:18:18.444 "subsystem": "bdev", 00:18:18.444 "config": [ 00:18:18.444 { 00:18:18.444 "method": "bdev_set_options", 00:18:18.444 "params": { 00:18:18.444 "bdev_io_pool_size": 65535, 00:18:18.444 "bdev_io_cache_size": 256, 00:18:18.444 "bdev_auto_examine": true, 00:18:18.444 "iobuf_small_cache_size": 128, 00:18:18.444 "iobuf_large_cache_size": 16 00:18:18.444 } 00:18:18.444 }, 00:18:18.444 { 00:18:18.444 "method": "bdev_raid_set_options", 00:18:18.444 "params": { 00:18:18.445 "process_window_size_kb": 1024 00:18:18.445 } 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "method": "bdev_iscsi_set_options", 00:18:18.445 "params": { 00:18:18.445 "timeout_sec": 30 00:18:18.445 } 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "method": "bdev_nvme_set_options", 00:18:18.445 "params": { 00:18:18.445 "action_on_timeout": "none", 00:18:18.445 "timeout_us": 0, 00:18:18.445 "timeout_admin_us": 0, 00:18:18.445 "keep_alive_timeout_ms": 10000, 00:18:18.445 "arbitration_burst": 0, 00:18:18.445 "low_priority_weight": 0, 00:18:18.445 "medium_priority_weight": 0, 00:18:18.445 "high_priority_weight": 0, 00:18:18.445 "nvme_adminq_poll_period_us": 10000, 00:18:18.445 "nvme_ioq_poll_period_us": 0, 00:18:18.445 "io_queue_requests": 0, 00:18:18.445 "delay_cmd_submit": true, 00:18:18.445 "transport_retry_count": 4, 00:18:18.445 "bdev_retry_count": 3, 00:18:18.445 "transport_ack_timeout": 0, 00:18:18.445 "ctrlr_loss_timeout_sec": 0, 00:18:18.445 "reconnect_delay_sec": 0, 00:18:18.445 "fast_io_fail_timeout_sec": 0, 00:18:18.445 "disable_auto_failback": false, 00:18:18.445 "generate_uuids": false, 00:18:18.445 "transport_tos": 0, 00:18:18.445 "nvme_error_stat": 
false, 00:18:18.445 "rdma_srq_size": 0, 00:18:18.445 "io_path_stat": false, 00:18:18.445 "allow_accel_sequence": false, 00:18:18.445 "rdma_max_cq_size": 0, 00:18:18.445 "rdma_cm_event_timeout_ms": 0, 00:18:18.445 "dhchap_digests": [ 00:18:18.445 "sha256", 00:18:18.445 "sha384", 00:18:18.445 "sha512" 00:18:18.445 ], 00:18:18.445 "dhchap_dhgroups": [ 00:18:18.445 "null", 00:18:18.445 "ffdhe2048", 00:18:18.445 "ffdhe3072", 00:18:18.445 "ffdhe4096", 00:18:18.445 "ffdhe6144", 00:18:18.445 "ffdhe8192" 00:18:18.445 ] 00:18:18.445 } 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "method": "bdev_nvme_set_hotplug", 00:18:18.445 "params": { 00:18:18.445 "period_us": 100000, 00:18:18.445 "enable": false 00:18:18.445 } 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "method": "bdev_malloc_create", 00:18:18.445 "params": { 00:18:18.445 "name": "malloc0", 00:18:18.445 "num_blocks": 8192, 00:18:18.445 "block_size": 4096, 00:18:18.445 "physical_block_size": 4096, 00:18:18.445 "uuid": "7b519eb9-27d7-46be-a9c7-e0aa7d58d1c5", 00:18:18.445 "optimal_io_boundary": 0 00:18:18.445 } 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "method": "bdev_wait_for_examine" 00:18:18.445 } 00:18:18.445 ] 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "subsystem": "nbd", 00:18:18.445 "config": [] 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "subsystem": "scheduler", 00:18:18.445 "config": [ 00:18:18.445 { 00:18:18.445 "method": "framework_set_scheduler", 00:18:18.445 "params": { 00:18:18.445 "name": "static" 00:18:18.445 } 00:18:18.445 } 00:18:18.445 ] 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "subsystem": "nvmf", 00:18:18.445 "config": [ 00:18:18.445 { 00:18:18.445 "method": "nvmf_set_config", 00:18:18.445 "params": { 00:18:18.445 "discovery_filter": "match_any", 00:18:18.445 "admin_cmd_passthru": { 00:18:18.445 "identify_ctrlr": false 00:18:18.445 } 00:18:18.445 } 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "method": "nvmf_set_max_subsystems", 00:18:18.445 "params": { 00:18:18.445 "max_subsystems": 1024 00:18:18.445 } 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "method": "nvmf_set_crdt", 00:18:18.445 "params": { 00:18:18.445 "crdt1": 0, 00:18:18.445 "crdt2": 0, 00:18:18.445 "crdt3": 0 00:18:18.445 } 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "method": "nvmf_create_transport", 00:18:18.445 "params": { 00:18:18.445 "trtype": "TCP", 00:18:18.445 "max_queue_depth": 128, 00:18:18.445 "max_io_qpairs_per_ctrlr": 127, 00:18:18.445 "in_capsule_data_size": 4096, 00:18:18.445 "max_io_size": 131072, 00:18:18.445 "io_unit_size": 131072, 00:18:18.445 "max_aq_depth": 128, 00:18:18.445 "num_shared_buffers": 511, 00:18:18.445 "buf_cache_size": 4294967295, 00:18:18.445 "dif_insert_or_strip": false, 00:18:18.445 "zcopy": false, 00:18:18.445 "c2h_success": false, 00:18:18.445 "sock_priority": 0, 00:18:18.445 "abort_timeout_sec": 1, 00:18:18.445 "ack_timeout": 0, 00:18:18.445 "data_wr_pool_size": 0 00:18:18.445 } 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "method": "nvmf_create_subsystem", 00:18:18.445 "params": { 00:18:18.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.445 "allow_any_host": false, 00:18:18.445 "serial_number": "SPDK00000000000001", 00:18:18.445 "model_number": "SPDK bdev Controller", 00:18:18.445 "max_namespaces": 10, 00:18:18.445 "min_cntlid": 1, 00:18:18.445 "max_cntlid": 65519, 00:18:18.445 "ana_reporting": false 00:18:18.445 } 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "method": "nvmf_subsystem_add_host", 00:18:18.445 "params": { 00:18:18.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.445 "host": "nqn.2016-06.io.spdk:host1", 
00:18:18.445 "psk": "/tmp/tmp.3qQaBjvwJ8" 00:18:18.445 } 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "method": "nvmf_subsystem_add_ns", 00:18:18.445 "params": { 00:18:18.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.445 "namespace": { 00:18:18.445 "nsid": 1, 00:18:18.445 "bdev_name": "malloc0", 00:18:18.445 "nguid": "7B519EB927D746BEA9C7E0AA7D58D1C5", 00:18:18.445 "uuid": "7b519eb9-27d7-46be-a9c7-e0aa7d58d1c5", 00:18:18.445 "no_auto_visible": false 00:18:18.445 } 00:18:18.445 } 00:18:18.445 }, 00:18:18.445 { 00:18:18.445 "method": "nvmf_subsystem_add_listener", 00:18:18.445 "params": { 00:18:18.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.445 "listen_address": { 00:18:18.445 "trtype": "TCP", 00:18:18.445 "adrfam": "IPv4", 00:18:18.445 "traddr": "10.0.0.2", 00:18:18.445 "trsvcid": "4420" 00:18:18.445 }, 00:18:18.445 "secure_channel": true 00:18:18.445 } 00:18:18.445 } 00:18:18.445 ] 00:18:18.445 } 00:18:18.445 ] 00:18:18.445 }' 00:18:18.445 21:33:41 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:18.705 21:33:41 -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:18.705 "subsystems": [ 00:18:18.705 { 00:18:18.705 "subsystem": "keyring", 00:18:18.705 "config": [] 00:18:18.705 }, 00:18:18.705 { 00:18:18.705 "subsystem": "iobuf", 00:18:18.705 "config": [ 00:18:18.705 { 00:18:18.705 "method": "iobuf_set_options", 00:18:18.705 "params": { 00:18:18.705 "small_pool_count": 8192, 00:18:18.705 "large_pool_count": 1024, 00:18:18.705 "small_bufsize": 8192, 00:18:18.705 "large_bufsize": 135168 00:18:18.705 } 00:18:18.705 } 00:18:18.705 ] 00:18:18.705 }, 00:18:18.705 { 00:18:18.705 "subsystem": "sock", 00:18:18.705 "config": [ 00:18:18.705 { 00:18:18.705 "method": "sock_impl_set_options", 00:18:18.705 "params": { 00:18:18.705 "impl_name": "posix", 00:18:18.705 "recv_buf_size": 2097152, 00:18:18.705 "send_buf_size": 2097152, 00:18:18.705 "enable_recv_pipe": true, 00:18:18.705 "enable_quickack": false, 00:18:18.705 "enable_placement_id": 0, 00:18:18.705 "enable_zerocopy_send_server": true, 00:18:18.705 "enable_zerocopy_send_client": false, 00:18:18.705 "zerocopy_threshold": 0, 00:18:18.705 "tls_version": 0, 00:18:18.705 "enable_ktls": false 00:18:18.705 } 00:18:18.705 }, 00:18:18.705 { 00:18:18.705 "method": "sock_impl_set_options", 00:18:18.705 "params": { 00:18:18.705 "impl_name": "ssl", 00:18:18.705 "recv_buf_size": 4096, 00:18:18.705 "send_buf_size": 4096, 00:18:18.705 "enable_recv_pipe": true, 00:18:18.705 "enable_quickack": false, 00:18:18.705 "enable_placement_id": 0, 00:18:18.705 "enable_zerocopy_send_server": true, 00:18:18.705 "enable_zerocopy_send_client": false, 00:18:18.705 "zerocopy_threshold": 0, 00:18:18.705 "tls_version": 0, 00:18:18.705 "enable_ktls": false 00:18:18.705 } 00:18:18.705 } 00:18:18.705 ] 00:18:18.705 }, 00:18:18.705 { 00:18:18.705 "subsystem": "vmd", 00:18:18.705 "config": [] 00:18:18.705 }, 00:18:18.705 { 00:18:18.705 "subsystem": "accel", 00:18:18.705 "config": [ 00:18:18.705 { 00:18:18.705 "method": "accel_set_options", 00:18:18.705 "params": { 00:18:18.705 "small_cache_size": 128, 00:18:18.705 "large_cache_size": 16, 00:18:18.705 "task_count": 2048, 00:18:18.705 "sequence_count": 2048, 00:18:18.705 "buf_count": 2048 00:18:18.705 } 00:18:18.705 } 00:18:18.705 ] 00:18:18.705 }, 00:18:18.705 { 00:18:18.705 "subsystem": "bdev", 00:18:18.705 "config": [ 00:18:18.705 { 00:18:18.705 "method": "bdev_set_options", 00:18:18.705 "params": { 00:18:18.705 "bdev_io_pool_size": 65535, 
00:18:18.705 "bdev_io_cache_size": 256, 00:18:18.705 "bdev_auto_examine": true, 00:18:18.705 "iobuf_small_cache_size": 128, 00:18:18.705 "iobuf_large_cache_size": 16 00:18:18.705 } 00:18:18.705 }, 00:18:18.705 { 00:18:18.705 "method": "bdev_raid_set_options", 00:18:18.705 "params": { 00:18:18.705 "process_window_size_kb": 1024 00:18:18.706 } 00:18:18.706 }, 00:18:18.706 { 00:18:18.706 "method": "bdev_iscsi_set_options", 00:18:18.706 "params": { 00:18:18.706 "timeout_sec": 30 00:18:18.706 } 00:18:18.706 }, 00:18:18.706 { 00:18:18.706 "method": "bdev_nvme_set_options", 00:18:18.706 "params": { 00:18:18.706 "action_on_timeout": "none", 00:18:18.706 "timeout_us": 0, 00:18:18.706 "timeout_admin_us": 0, 00:18:18.706 "keep_alive_timeout_ms": 10000, 00:18:18.706 "arbitration_burst": 0, 00:18:18.706 "low_priority_weight": 0, 00:18:18.706 "medium_priority_weight": 0, 00:18:18.706 "high_priority_weight": 0, 00:18:18.706 "nvme_adminq_poll_period_us": 10000, 00:18:18.706 "nvme_ioq_poll_period_us": 0, 00:18:18.706 "io_queue_requests": 512, 00:18:18.706 "delay_cmd_submit": true, 00:18:18.706 "transport_retry_count": 4, 00:18:18.706 "bdev_retry_count": 3, 00:18:18.706 "transport_ack_timeout": 0, 00:18:18.706 "ctrlr_loss_timeout_sec": 0, 00:18:18.706 "reconnect_delay_sec": 0, 00:18:18.706 "fast_io_fail_timeout_sec": 0, 00:18:18.706 "disable_auto_failback": false, 00:18:18.706 "generate_uuids": false, 00:18:18.706 "transport_tos": 0, 00:18:18.706 "nvme_error_stat": false, 00:18:18.706 "rdma_srq_size": 0, 00:18:18.706 "io_path_stat": false, 00:18:18.706 "allow_accel_sequence": false, 00:18:18.706 "rdma_max_cq_size": 0, 00:18:18.706 "rdma_cm_event_timeout_ms": 0, 00:18:18.706 "dhchap_digests": [ 00:18:18.706 "sha256", 00:18:18.706 "sha384", 00:18:18.706 "sha512" 00:18:18.706 ], 00:18:18.706 "dhchap_dhgroups": [ 00:18:18.706 "null", 00:18:18.706 "ffdhe2048", 00:18:18.706 "ffdhe3072", 00:18:18.706 "ffdhe4096", 00:18:18.706 "ffdhe6144", 00:18:18.706 "ffdhe8192" 00:18:18.706 ] 00:18:18.706 } 00:18:18.706 }, 00:18:18.706 { 00:18:18.706 "method": "bdev_nvme_attach_controller", 00:18:18.706 "params": { 00:18:18.706 "name": "TLSTEST", 00:18:18.706 "trtype": "TCP", 00:18:18.706 "adrfam": "IPv4", 00:18:18.706 "traddr": "10.0.0.2", 00:18:18.706 "trsvcid": "4420", 00:18:18.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.706 "prchk_reftag": false, 00:18:18.706 "prchk_guard": false, 00:18:18.706 "ctrlr_loss_timeout_sec": 0, 00:18:18.706 "reconnect_delay_sec": 0, 00:18:18.706 "fast_io_fail_timeout_sec": 0, 00:18:18.706 "psk": "/tmp/tmp.3qQaBjvwJ8", 00:18:18.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.706 "hdgst": false, 00:18:18.706 "ddgst": false 00:18:18.706 } 00:18:18.706 }, 00:18:18.706 { 00:18:18.706 "method": "bdev_nvme_set_hotplug", 00:18:18.706 "params": { 00:18:18.706 "period_us": 100000, 00:18:18.706 "enable": false 00:18:18.706 } 00:18:18.706 }, 00:18:18.706 { 00:18:18.706 "method": "bdev_wait_for_examine" 00:18:18.706 } 00:18:18.706 ] 00:18:18.706 }, 00:18:18.706 { 00:18:18.706 "subsystem": "nbd", 00:18:18.706 "config": [] 00:18:18.706 } 00:18:18.706 ] 00:18:18.706 }' 00:18:18.706 21:33:41 -- target/tls.sh@199 -- # killprocess 2879089 00:18:18.706 21:33:41 -- common/autotest_common.sh@936 -- # '[' -z 2879089 ']' 00:18:18.706 21:33:41 -- common/autotest_common.sh@940 -- # kill -0 2879089 00:18:18.706 21:33:41 -- common/autotest_common.sh@941 -- # uname 00:18:18.706 21:33:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.706 21:33:41 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 2879089 00:18:18.966 21:33:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:18.966 21:33:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:18.966 21:33:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2879089' 00:18:18.966 killing process with pid 2879089 00:18:18.966 21:33:41 -- common/autotest_common.sh@955 -- # kill 2879089 00:18:18.966 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.966 00:18:18.966 Latency(us) 00:18:18.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.966 =================================================================================================================== 00:18:18.966 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.966 [2024-04-24 21:33:41.609225] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:18.966 21:33:41 -- common/autotest_common.sh@960 -- # wait 2879089 00:18:18.966 21:33:41 -- target/tls.sh@200 -- # killprocess 2878666 00:18:18.966 21:33:41 -- common/autotest_common.sh@936 -- # '[' -z 2878666 ']' 00:18:18.966 21:33:41 -- common/autotest_common.sh@940 -- # kill -0 2878666 00:18:18.966 21:33:41 -- common/autotest_common.sh@941 -- # uname 00:18:18.966 21:33:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.966 21:33:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2878666 00:18:19.226 21:33:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:19.226 21:33:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:19.226 21:33:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2878666' 00:18:19.226 killing process with pid 2878666 00:18:19.226 21:33:41 -- common/autotest_common.sh@955 -- # kill 2878666 00:18:19.226 [2024-04-24 21:33:41.861759] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:19.226 21:33:41 -- common/autotest_common.sh@960 -- # wait 2878666 00:18:19.226 21:33:42 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:19.226 21:33:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:19.226 21:33:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:19.226 21:33:42 -- target/tls.sh@203 -- # echo '{ 00:18:19.226 "subsystems": [ 00:18:19.226 { 00:18:19.226 "subsystem": "keyring", 00:18:19.226 "config": [] 00:18:19.226 }, 00:18:19.226 { 00:18:19.226 "subsystem": "iobuf", 00:18:19.226 "config": [ 00:18:19.226 { 00:18:19.226 "method": "iobuf_set_options", 00:18:19.226 "params": { 00:18:19.226 "small_pool_count": 8192, 00:18:19.226 "large_pool_count": 1024, 00:18:19.226 "small_bufsize": 8192, 00:18:19.226 "large_bufsize": 135168 00:18:19.226 } 00:18:19.226 } 00:18:19.226 ] 00:18:19.226 }, 00:18:19.226 { 00:18:19.226 "subsystem": "sock", 00:18:19.226 "config": [ 00:18:19.226 { 00:18:19.226 "method": "sock_impl_set_options", 00:18:19.226 "params": { 00:18:19.226 "impl_name": "posix", 00:18:19.226 "recv_buf_size": 2097152, 00:18:19.226 "send_buf_size": 2097152, 00:18:19.226 "enable_recv_pipe": true, 00:18:19.226 "enable_quickack": false, 00:18:19.226 "enable_placement_id": 0, 00:18:19.226 "enable_zerocopy_send_server": true, 00:18:19.226 "enable_zerocopy_send_client": false, 00:18:19.226 "zerocopy_threshold": 0, 00:18:19.226 "tls_version": 0, 00:18:19.226 "enable_ktls": false 
00:18:19.226 } 00:18:19.226 }, 00:18:19.226 { 00:18:19.226 "method": "sock_impl_set_options", 00:18:19.226 "params": { 00:18:19.226 "impl_name": "ssl", 00:18:19.226 "recv_buf_size": 4096, 00:18:19.226 "send_buf_size": 4096, 00:18:19.226 "enable_recv_pipe": true, 00:18:19.226 "enable_quickack": false, 00:18:19.226 "enable_placement_id": 0, 00:18:19.226 "enable_zerocopy_send_server": true, 00:18:19.226 "enable_zerocopy_send_client": false, 00:18:19.226 "zerocopy_threshold": 0, 00:18:19.226 "tls_version": 0, 00:18:19.226 "enable_ktls": false 00:18:19.226 } 00:18:19.226 } 00:18:19.226 ] 00:18:19.226 }, 00:18:19.226 { 00:18:19.226 "subsystem": "vmd", 00:18:19.226 "config": [] 00:18:19.226 }, 00:18:19.226 { 00:18:19.226 "subsystem": "accel", 00:18:19.226 "config": [ 00:18:19.226 { 00:18:19.226 "method": "accel_set_options", 00:18:19.226 "params": { 00:18:19.226 "small_cache_size": 128, 00:18:19.226 "large_cache_size": 16, 00:18:19.226 "task_count": 2048, 00:18:19.226 "sequence_count": 2048, 00:18:19.226 "buf_count": 2048 00:18:19.226 } 00:18:19.226 } 00:18:19.226 ] 00:18:19.226 }, 00:18:19.227 { 00:18:19.227 "subsystem": "bdev", 00:18:19.227 "config": [ 00:18:19.227 { 00:18:19.227 "method": "bdev_set_options", 00:18:19.227 "params": { 00:18:19.227 "bdev_io_pool_size": 65535, 00:18:19.227 "bdev_io_cache_size": 256, 00:18:19.227 "bdev_auto_examine": true, 00:18:19.227 "iobuf_small_cache_size": 128, 00:18:19.227 "iobuf_large_cache_size": 16 00:18:19.227 } 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "method": "bdev_raid_set_options", 00:18:19.227 "params": { 00:18:19.227 "process_window_size_kb": 1024 00:18:19.227 } 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "method": "bdev_iscsi_set_options", 00:18:19.227 "params": { 00:18:19.227 "timeout_sec": 30 00:18:19.227 } 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "method": "bdev_nvme_set_options", 00:18:19.227 "params": { 00:18:19.227 "action_on_timeout": "none", 00:18:19.227 "timeout_us": 0, 00:18:19.227 "timeout_admin_us": 0, 00:18:19.227 "keep_alive_timeout_ms": 10000, 00:18:19.227 "arbitration_burst": 0, 00:18:19.227 "low_priority_weight": 0, 00:18:19.227 "medium_priority_weight": 0, 00:18:19.227 "high_priority_weight": 0, 00:18:19.227 "nvme_adminq_poll_period_us": 10000, 00:18:19.227 "nvme_ioq_poll_period_us": 0, 00:18:19.227 "io_queue_requests": 0, 00:18:19.227 "delay_cmd_submit": true, 00:18:19.227 "transport_retry_count": 4, 00:18:19.227 "bdev_retry_count": 3, 00:18:19.227 "transport_ack_timeout": 0, 00:18:19.227 "ctrlr_loss_timeout_sec": 0, 00:18:19.227 "reconnect_delay_sec": 0, 00:18:19.227 "fast_io_fail_timeout_sec": 0, 00:18:19.227 "disable_auto_failback": false, 00:18:19.227 "generate_uuids": false, 00:18:19.227 "transport_tos": 0, 00:18:19.227 "nvme_error_stat": false, 00:18:19.227 "rdma_srq_size": 0, 00:18:19.227 "io_path_stat": false, 00:18:19.227 "allow_accel_sequence": false, 00:18:19.227 "rdma_max_cq_size": 0, 00:18:19.227 "rdma_cm_event_timeout_ms": 0, 00:18:19.227 "dhchap_digests": [ 00:18:19.227 "sha256", 00:18:19.227 "sha384", 00:18:19.227 "sha512" 00:18:19.227 ], 00:18:19.227 "dhchap_dhgroups": [ 00:18:19.227 "null", 00:18:19.227 "ffdhe2048", 00:18:19.227 "ffdhe3072", 00:18:19.227 "ffdhe4096", 00:18:19.227 "ffdhe6144", 00:18:19.227 "ffdhe8192" 00:18:19.227 ] 00:18:19.227 } 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "method": "bdev_nvme_set_hotplug", 00:18:19.227 "params": { 00:18:19.227 "period_us": 100000, 00:18:19.227 "enable": false 00:18:19.227 } 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "method": "bdev_malloc_create", 
00:18:19.227 "params": { 00:18:19.227 "name": "malloc0", 00:18:19.227 "num_blocks": 8192, 00:18:19.227 "block_size": 4096, 00:18:19.227 "physical_block_size": 4096, 00:18:19.227 "uuid": "7b519eb9-27d7-46be-a9c7-e0aa7d58d1c5", 00:18:19.227 "optimal_io_boundary": 0 00:18:19.227 } 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "method": "bdev_wait_for_examine" 00:18:19.227 } 00:18:19.227 ] 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "subsystem": "nbd", 00:18:19.227 "config": [] 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "subsystem": "scheduler", 00:18:19.227 "config": [ 00:18:19.227 { 00:18:19.227 "method": "framework_set_scheduler", 00:18:19.227 "params": { 00:18:19.227 "name": "static" 00:18:19.227 } 00:18:19.227 } 00:18:19.227 ] 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "subsystem": "nvmf", 00:18:19.227 "config": [ 00:18:19.227 { 00:18:19.227 "method": "nvmf_set_config", 00:18:19.227 "params": { 00:18:19.227 "discovery_filter": "match_any", 00:18:19.227 "admin_cmd_passthru": { 00:18:19.227 "identify_ctrlr": false 00:18:19.227 } 00:18:19.227 } 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "method": "nvmf_set_max_subsystems", 00:18:19.227 "params": { 00:18:19.227 "max_subsystems": 1024 00:18:19.227 } 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "method": "nvmf_set_crdt", 00:18:19.227 "params": { 00:18:19.227 "crdt1": 0, 00:18:19.227 "crdt2": 0, 00:18:19.227 "crdt3": 0 00:18:19.227 } 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "method": "nvmf_create_transport", 00:18:19.227 "params": { 00:18:19.227 "trtype": "TCP", 00:18:19.227 "max_queue_depth": 128, 00:18:19.227 "max_io_qpairs_per_ctrlr": 127, 00:18:19.227 "in_capsule_data_size": 4096, 00:18:19.227 "max_io_size": 131072, 00:18:19.227 "io_unit_size": 131072, 00:18:19.227 "max_aq_depth": 128, 00:18:19.227 "num_shared_buffers": 511, 00:18:19.227 "buf_cache_size": 4294967295, 00:18:19.227 "dif_insert_or_strip": false, 00:18:19.227 "zcopy": false, 00:18:19.227 "c2h_success": false, 00:18:19.227 "sock_priority": 0, 00:18:19.227 "abort_timeout_sec": 1, 00:18:19.227 "ack_timeout": 0, 00:18:19.227 "data_wr_pool_size": 0 00:18:19.227 } 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "method": "nvmf_create_subsystem", 00:18:19.227 "params": { 00:18:19.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.227 "allow_any_host": false, 00:18:19.227 "serial_number": "SPDK00000000000001", 00:18:19.227 "model_number": "SPDK bdev Controller", 00:18:19.227 "max_namespaces": 10, 00:18:19.227 "min_cntlid": 1, 00:18:19.227 "max_cntlid": 65519, 00:18:19.227 "ana_reporting": false 00:18:19.227 } 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "method": "nvmf_subsystem_add_host", 00:18:19.227 "params": { 00:18:19.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.227 "host": "nqn.2016-06.io.spdk:host1", 00:18:19.227 "psk": "/tmp/tmp.3qQaBjvwJ8" 00:18:19.227 } 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "method": "nvmf_subsystem_add_ns", 00:18:19.227 "params": { 00:18:19.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.227 "namespace": { 00:18:19.227 "nsid": 1, 00:18:19.227 "bdev_name": "malloc0", 00:18:19.227 "nguid": "7B519EB927D746BEA9C7E0AA7D58D1C5", 00:18:19.227 "uuid": "7b519eb9-27d7-46be-a9c7-e0aa7d58d1c5", 00:18:19.227 "no_auto_visible": false 00:18:19.227 } 00:18:19.227 } 00:18:19.227 }, 00:18:19.227 { 00:18:19.227 "method": "nvmf_subsystem_add_listener", 00:18:19.227 "params": { 00:18:19.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.227 "listen_address": { 00:18:19.227 "trtype": "TCP", 00:18:19.227 "adrfam": "IPv4", 00:18:19.227 "traddr": "10.0.0.2", 00:18:19.227 
"trsvcid": "4420" 00:18:19.227 }, 00:18:19.227 "secure_channel": true 00:18:19.227 } 00:18:19.227 } 00:18:19.227 ] 00:18:19.227 } 00:18:19.227 ] 00:18:19.227 }' 00:18:19.227 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:18:19.227 21:33:42 -- nvmf/common.sh@470 -- # nvmfpid=2879449 00:18:19.227 21:33:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:19.227 21:33:42 -- nvmf/common.sh@471 -- # waitforlisten 2879449 00:18:19.227 21:33:42 -- common/autotest_common.sh@817 -- # '[' -z 2879449 ']' 00:18:19.227 21:33:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.227 21:33:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:19.227 21:33:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.227 21:33:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:19.227 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:18:19.487 [2024-04-24 21:33:42.124517] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:18:19.487 [2024-04-24 21:33:42.124566] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.487 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.487 [2024-04-24 21:33:42.196171] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.488 [2024-04-24 21:33:42.267251] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.488 [2024-04-24 21:33:42.267287] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.488 [2024-04-24 21:33:42.267297] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.488 [2024-04-24 21:33:42.267305] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.488 [2024-04-24 21:33:42.267313] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:19.488 [2024-04-24 21:33:42.267377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.747 [2024-04-24 21:33:42.461937] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.747 [2024-04-24 21:33:42.477911] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:19.747 [2024-04-24 21:33:42.493960] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:19.747 [2024-04-24 21:33:42.502581] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.317 21:33:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:20.317 21:33:42 -- common/autotest_common.sh@850 -- # return 0 00:18:20.317 21:33:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:20.317 21:33:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:20.317 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:18:20.317 21:33:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.317 21:33:42 -- target/tls.sh@207 -- # bdevperf_pid=2879606 00:18:20.317 21:33:42 -- target/tls.sh@208 -- # waitforlisten 2879606 /var/tmp/bdevperf.sock 00:18:20.317 21:33:42 -- common/autotest_common.sh@817 -- # '[' -z 2879606 ']' 00:18:20.317 21:33:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.317 21:33:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:20.317 21:33:42 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:20.317 21:33:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
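bdevperf is started here with -z, meaning it comes up idle and waits on the RPC socket named by -r, while its full bdev/NVMe configuration arrives on /dev/fd/63 as shown next. A sketch of the same launch using an ordinary file (config.json stands in for the JSON echoed below; every other flag is copied from the traced command):

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c config.json &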
00:18:20.317 21:33:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:20.317 21:33:42 -- target/tls.sh@204 -- # echo '{ 00:18:20.317 "subsystems": [ 00:18:20.317 { 00:18:20.317 "subsystem": "keyring", 00:18:20.317 "config": [] 00:18:20.317 }, 00:18:20.317 { 00:18:20.317 "subsystem": "iobuf", 00:18:20.317 "config": [ 00:18:20.317 { 00:18:20.317 "method": "iobuf_set_options", 00:18:20.317 "params": { 00:18:20.317 "small_pool_count": 8192, 00:18:20.317 "large_pool_count": 1024, 00:18:20.317 "small_bufsize": 8192, 00:18:20.317 "large_bufsize": 135168 00:18:20.317 } 00:18:20.317 } 00:18:20.317 ] 00:18:20.317 }, 00:18:20.317 { 00:18:20.317 "subsystem": "sock", 00:18:20.317 "config": [ 00:18:20.317 { 00:18:20.317 "method": "sock_impl_set_options", 00:18:20.317 "params": { 00:18:20.317 "impl_name": "posix", 00:18:20.317 "recv_buf_size": 2097152, 00:18:20.317 "send_buf_size": 2097152, 00:18:20.317 "enable_recv_pipe": true, 00:18:20.317 "enable_quickack": false, 00:18:20.317 "enable_placement_id": 0, 00:18:20.317 "enable_zerocopy_send_server": true, 00:18:20.317 "enable_zerocopy_send_client": false, 00:18:20.317 "zerocopy_threshold": 0, 00:18:20.317 "tls_version": 0, 00:18:20.317 "enable_ktls": false 00:18:20.317 } 00:18:20.317 }, 00:18:20.317 { 00:18:20.317 "method": "sock_impl_set_options", 00:18:20.317 "params": { 00:18:20.317 "impl_name": "ssl", 00:18:20.317 "recv_buf_size": 4096, 00:18:20.317 "send_buf_size": 4096, 00:18:20.317 "enable_recv_pipe": true, 00:18:20.317 "enable_quickack": false, 00:18:20.317 "enable_placement_id": 0, 00:18:20.317 "enable_zerocopy_send_server": true, 00:18:20.317 "enable_zerocopy_send_client": false, 00:18:20.317 "zerocopy_threshold": 0, 00:18:20.317 "tls_version": 0, 00:18:20.317 "enable_ktls": false 00:18:20.317 } 00:18:20.317 } 00:18:20.317 ] 00:18:20.317 }, 00:18:20.317 { 00:18:20.317 "subsystem": "vmd", 00:18:20.317 "config": [] 00:18:20.317 }, 00:18:20.317 { 00:18:20.317 "subsystem": "accel", 00:18:20.317 "config": [ 00:18:20.317 { 00:18:20.317 "method": "accel_set_options", 00:18:20.317 "params": { 00:18:20.317 "small_cache_size": 128, 00:18:20.317 "large_cache_size": 16, 00:18:20.317 "task_count": 2048, 00:18:20.317 "sequence_count": 2048, 00:18:20.317 "buf_count": 2048 00:18:20.318 } 00:18:20.318 } 00:18:20.318 ] 00:18:20.318 }, 00:18:20.318 { 00:18:20.318 "subsystem": "bdev", 00:18:20.318 "config": [ 00:18:20.318 { 00:18:20.318 "method": "bdev_set_options", 00:18:20.318 "params": { 00:18:20.318 "bdev_io_pool_size": 65535, 00:18:20.318 "bdev_io_cache_size": 256, 00:18:20.318 "bdev_auto_examine": true, 00:18:20.318 "iobuf_small_cache_size": 128, 00:18:20.318 "iobuf_large_cache_size": 16 00:18:20.318 } 00:18:20.318 }, 00:18:20.318 { 00:18:20.318 "method": "bdev_raid_set_options", 00:18:20.318 "params": { 00:18:20.318 "process_window_size_kb": 1024 00:18:20.318 } 00:18:20.318 }, 00:18:20.318 { 00:18:20.318 "method": "bdev_iscsi_set_options", 00:18:20.318 "params": { 00:18:20.318 "timeout_sec": 30 00:18:20.318 } 00:18:20.318 }, 00:18:20.318 { 00:18:20.318 "method": "bdev_nvme_set_options", 00:18:20.318 "params": { 00:18:20.318 "action_on_timeout": "none", 00:18:20.318 "timeout_us": 0, 00:18:20.318 "timeout_admin_us": 0, 00:18:20.318 "keep_alive_timeout_ms": 10000, 00:18:20.318 "arbitration_burst": 0, 00:18:20.318 "low_priority_weight": 0, 00:18:20.318 "medium_priority_weight": 0, 00:18:20.318 "high_priority_weight": 0, 00:18:20.318 "nvme_adminq_poll_period_us": 10000, 00:18:20.318 "nvme_ioq_poll_period_us": 0, 00:18:20.318 "io_queue_requests": 512, 
00:18:20.318 "delay_cmd_submit": true, 00:18:20.318 "transport_retry_count": 4, 00:18:20.318 "bdev_retry_count": 3, 00:18:20.318 "transport_ack_timeout": 0, 00:18:20.318 "ctrlr_loss_timeout_sec": 0, 00:18:20.318 "reconnect_delay_sec": 0, 00:18:20.318 "fast_io_fail_timeout_sec": 0, 00:18:20.318 "disable_auto_failback": false, 00:18:20.318 "generate_uuids": false, 00:18:20.318 "transport_tos": 0, 00:18:20.318 "nvme_error_stat": false, 00:18:20.318 "rdma_srq_size": 0, 00:18:20.318 "io_path_stat": false, 00:18:20.318 "allow_accel_sequence": false, 00:18:20.318 "rdma_max_cq_size": 0, 00:18:20.318 "rdma_cm_event_timeout_ms": 0, 00:18:20.318 "dhchap_digests": [ 00:18:20.318 "sha256", 00:18:20.318 "sha384", 00:18:20.318 "sha512" 00:18:20.318 ], 00:18:20.318 "dhchap_dhgroups": [ 00:18:20.318 "null", 00:18:20.318 "ffdhe2048", 00:18:20.318 "ffdhe3072", 00:18:20.318 "ffdhe4096", 00:18:20.318 "ffdhe6144", 00:18:20.318 "ffdhe8192" 00:18:20.318 ] 00:18:20.318 } 00:18:20.318 }, 00:18:20.318 { 00:18:20.318 "method": "bdev_nvme_attach_controller", 00:18:20.318 "params": { 00:18:20.318 "name": "TLSTEST", 00:18:20.318 "trtype": "TCP", 00:18:20.318 "adrfam": "IPv4", 00:18:20.318 "traddr": "10.0.0.2", 00:18:20.318 "trsvcid": "4420", 00:18:20.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.318 "prchk_reftag": false, 00:18:20.318 "prchk_guard": false, 00:18:20.318 "ctrlr_loss_timeout_sec": 0, 00:18:20.318 "reconnect_delay_sec": 0, 00:18:20.318 "fast_io_fail_timeout_sec": 0, 00:18:20.318 "psk": "/tmp/tmp.3qQaBjvwJ8", 00:18:20.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.318 "hdgst": false, 00:18:20.318 "ddgst": false 00:18:20.318 } 00:18:20.318 }, 00:18:20.318 { 00:18:20.318 "method": "bdev_nvme_set_hotplug", 00:18:20.318 "params": { 00:18:20.318 "period_us": 100000, 00:18:20.318 "enable": false 00:18:20.318 } 00:18:20.318 }, 00:18:20.318 { 00:18:20.318 "method": "bdev_wait_for_examine" 00:18:20.318 } 00:18:20.318 ] 00:18:20.318 }, 00:18:20.318 { 00:18:20.318 "subsystem": "nbd", 00:18:20.318 "config": [] 00:18:20.318 } 00:18:20.318 ] 00:18:20.318 }' 00:18:20.318 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:18:20.318 [2024-04-24 21:33:43.003279] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:18:20.318 [2024-04-24 21:33:43.003328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879606 ] 00:18:20.318 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.318 [2024-04-24 21:33:43.068579] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.318 [2024-04-24 21:33:43.136100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.577 [2024-04-24 21:33:43.269583] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.577 [2024-04-24 21:33:43.269678] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:21.146 21:33:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:21.146 21:33:43 -- common/autotest_common.sh@850 -- # return 0 00:18:21.146 21:33:43 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:21.146 Running I/O for 10 seconds... 
00:18:31.129 00:18:31.129 Latency(us) 00:18:31.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.129 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:31.129 Verification LBA range: start 0x0 length 0x2000 00:18:31.129 TLSTESTn1 : 10.06 1695.39 6.62 0.00 0.00 75305.02 6920.60 121634.82 00:18:31.129 =================================================================================================================== 00:18:31.129 Total : 1695.39 6.62 0.00 0.00 75305.02 6920.60 121634.82 00:18:31.129 0 00:18:31.129 21:33:53 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:31.129 21:33:53 -- target/tls.sh@214 -- # killprocess 2879606 00:18:31.129 21:33:53 -- common/autotest_common.sh@936 -- # '[' -z 2879606 ']' 00:18:31.129 21:33:53 -- common/autotest_common.sh@940 -- # kill -0 2879606 00:18:31.129 21:33:53 -- common/autotest_common.sh@941 -- # uname 00:18:31.129 21:33:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:31.130 21:33:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2879606 00:18:31.389 21:33:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:31.389 21:33:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:31.389 21:33:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2879606' 00:18:31.389 killing process with pid 2879606 00:18:31.389 21:33:54 -- common/autotest_common.sh@955 -- # kill 2879606 00:18:31.389 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.389 00:18:31.389 Latency(us) 00:18:31.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.389 =================================================================================================================== 00:18:31.389 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.389 [2024-04-24 21:33:54.032720] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:31.389 21:33:54 -- common/autotest_common.sh@960 -- # wait 2879606 00:18:31.389 21:33:54 -- target/tls.sh@215 -- # killprocess 2879449 00:18:31.389 21:33:54 -- common/autotest_common.sh@936 -- # '[' -z 2879449 ']' 00:18:31.389 21:33:54 -- common/autotest_common.sh@940 -- # kill -0 2879449 00:18:31.389 21:33:54 -- common/autotest_common.sh@941 -- # uname 00:18:31.389 21:33:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:31.389 21:33:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2879449 00:18:31.649 21:33:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:31.649 21:33:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:31.649 21:33:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2879449' 00:18:31.649 killing process with pid 2879449 00:18:31.649 21:33:54 -- common/autotest_common.sh@955 -- # kill 2879449 00:18:31.649 [2024-04-24 21:33:54.290693] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:31.649 21:33:54 -- common/autotest_common.sh@960 -- # wait 2879449 00:18:31.649 21:33:54 -- target/tls.sh@218 -- # nvmfappstart 00:18:31.649 21:33:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:31.649 21:33:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:31.649 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:18:31.649 21:33:54 
-- nvmf/common.sh@470 -- # nvmfpid=2881606 00:18:31.649 21:33:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:31.649 21:33:54 -- nvmf/common.sh@471 -- # waitforlisten 2881606 00:18:31.649 21:33:54 -- common/autotest_common.sh@817 -- # '[' -z 2881606 ']' 00:18:31.649 21:33:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.649 21:33:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:31.649 21:33:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.649 21:33:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:31.649 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:18:31.909 [2024-04-24 21:33:54.555590] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:18:31.909 [2024-04-24 21:33:54.555638] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.909 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.909 [2024-04-24 21:33:54.628276] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.909 [2024-04-24 21:33:54.700104] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.909 [2024-04-24 21:33:54.700141] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.909 [2024-04-24 21:33:54.700151] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.909 [2024-04-24 21:33:54.700160] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.909 [2024-04-24 21:33:54.700167] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
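Since this target runs with -e 0xFFFF, the startup notice above spells out how to snapshot its tracepoints. A sketch following that hint (the spdk_trace binary location is assumed to match this build tree):

    build/bin/spdk_trace -s nvmf -i 0    # snapshot the enabled nvmf tracepoints
    cp /dev/shm/nvmf_trace.0 .           # or keep the raw buffer for offline analysis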
00:18:31.909 [2024-04-24 21:33:54.700192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.476 21:33:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:32.476 21:33:55 -- common/autotest_common.sh@850 -- # return 0 00:18:32.476 21:33:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:32.476 21:33:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:32.476 21:33:55 -- common/autotest_common.sh@10 -- # set +x 00:18:32.735 21:33:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.735 21:33:55 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.3qQaBjvwJ8 00:18:32.735 21:33:55 -- target/tls.sh@49 -- # local key=/tmp/tmp.3qQaBjvwJ8 00:18:32.735 21:33:55 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:32.735 [2024-04-24 21:33:55.526232] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.735 21:33:55 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:32.994 21:33:55 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:32.994 [2024-04-24 21:33:55.871105] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:32.994 [2024-04-24 21:33:55.871320] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.253 21:33:55 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:33.253 malloc0 00:18:33.253 21:33:56 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:33.511 21:33:56 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3qQaBjvwJ8 00:18:33.511 [2024-04-24 21:33:56.384770] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:33.771 21:33:56 -- target/tls.sh@222 -- # bdevperf_pid=2881897 00:18:33.771 21:33:56 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.771 21:33:56 -- target/tls.sh@225 -- # waitforlisten 2881897 /var/tmp/bdevperf.sock 00:18:33.771 21:33:56 -- common/autotest_common.sh@817 -- # '[' -z 2881897 ']' 00:18:33.771 21:33:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.771 21:33:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:33.771 21:33:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
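setup_nvmf_tgt above assembles the secure target in six RPCs: TCP transport, subsystem, TLS listener (-k), malloc bdev, namespace, and an allowed host bound to the PSK file (the form that triggers the "PSK path" deprecation warning). Collected from the traced commands into one sketch:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.3qQaBjvwJ8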
00:18:33.771 21:33:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:33.771 21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:18:33.771 21:33:56 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:33.771 [2024-04-24 21:33:56.444522] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:18:33.771 [2024-04-24 21:33:56.444570] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881897 ] 00:18:33.771 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.771 [2024-04-24 21:33:56.513694] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.771 [2024-04-24 21:33:56.581463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.710 21:33:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:34.710 21:33:57 -- common/autotest_common.sh@850 -- # return 0 00:18:34.710 21:33:57 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3qQaBjvwJ8 00:18:34.710 21:33:57 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:34.710 [2024-04-24 21:33:57.563970] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.970 nvme0n1 00:18:34.970 21:33:57 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:34.970 Running I/O for 1 seconds... 
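On the initiator side this run goes through the keyring, the replacement for the spdk_nvme_ctrlr_opts.psk path that the warnings above flag for removal in v24.09: the key file is registered once under a name, and the attach then references that name. The two traced RPCs as a sketch:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3qQaBjvwJ8
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1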
00:18:36.355 00:18:36.355 Latency(us) 00:18:36.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.355 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:36.355 Verification LBA range: start 0x0 length 0x2000 00:18:36.355 nvme0n1 : 1.07 1301.41 5.08 0.00 0.00 96016.30 6973.03 145961.78 00:18:36.355 =================================================================================================================== 00:18:36.355 Total : 1301.41 5.08 0.00 0.00 96016.30 6973.03 145961.78 00:18:36.355 0 00:18:36.355 21:33:58 -- target/tls.sh@234 -- # killprocess 2881897 00:18:36.355 21:33:58 -- common/autotest_common.sh@936 -- # '[' -z 2881897 ']' 00:18:36.355 21:33:58 -- common/autotest_common.sh@940 -- # kill -0 2881897 00:18:36.355 21:33:58 -- common/autotest_common.sh@941 -- # uname 00:18:36.355 21:33:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.355 21:33:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2881897 00:18:36.355 21:33:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:36.355 21:33:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:36.355 21:33:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2881897' 00:18:36.355 killing process with pid 2881897 00:18:36.355 21:33:58 -- common/autotest_common.sh@955 -- # kill 2881897 00:18:36.355 Received shutdown signal, test time was about 1.000000 seconds 00:18:36.355 00:18:36.355 Latency(us) 00:18:36.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.355 =================================================================================================================== 00:18:36.355 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.355 21:33:58 -- common/autotest_common.sh@960 -- # wait 2881897 00:18:36.355 21:33:59 -- target/tls.sh@235 -- # killprocess 2881606 00:18:36.355 21:33:59 -- common/autotest_common.sh@936 -- # '[' -z 2881606 ']' 00:18:36.355 21:33:59 -- common/autotest_common.sh@940 -- # kill -0 2881606 00:18:36.355 21:33:59 -- common/autotest_common.sh@941 -- # uname 00:18:36.355 21:33:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.355 21:33:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2881606 00:18:36.355 21:33:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:36.355 21:33:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:36.355 21:33:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2881606' 00:18:36.355 killing process with pid 2881606 00:18:36.355 21:33:59 -- common/autotest_common.sh@955 -- # kill 2881606 00:18:36.355 [2024-04-24 21:33:59.149013] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:36.355 21:33:59 -- common/autotest_common.sh@960 -- # wait 2881606 00:18:36.615 21:33:59 -- target/tls.sh@238 -- # nvmfappstart 00:18:36.615 21:33:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:36.615 21:33:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:36.615 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:18:36.615 21:33:59 -- nvmf/common.sh@470 -- # nvmfpid=2882441 00:18:36.615 21:33:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:36.615 21:33:59 -- nvmf/common.sh@471 -- # waitforlisten 2882441 
00:18:36.615 21:33:59 -- common/autotest_common.sh@817 -- # '[' -z 2882441 ']' 00:18:36.615 21:33:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.615 21:33:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:36.615 21:33:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.615 21:33:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:36.615 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:18:36.615 [2024-04-24 21:33:59.402726] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:18:36.615 [2024-04-24 21:33:59.402776] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.615 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.615 [2024-04-24 21:33:59.476126] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.874 [2024-04-24 21:33:59.549528] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.874 [2024-04-24 21:33:59.549569] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.874 [2024-04-24 21:33:59.549578] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.874 [2024-04-24 21:33:59.549587] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.874 [2024-04-24 21:33:59.549594] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
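Before the next bdevperf attaches, the target built below (malloc0 plus a TLS listener on 10.0.0.2:4420) could be sanity-checked over its RPC socket; nvmf_get_subsystems is the usual query for that, though this run does not trace it:

    rpc.py nvmf_get_subsystems    # lists subsystems with namespaces, listeners, allowed hosts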
00:18:36.874 [2024-04-24 21:33:59.549618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.443 21:34:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:37.443 21:34:00 -- common/autotest_common.sh@850 -- # return 0 00:18:37.443 21:34:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:37.443 21:34:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:37.443 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:18:37.443 21:34:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.443 21:34:00 -- target/tls.sh@239 -- # rpc_cmd 00:18:37.443 21:34:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.443 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:18:37.443 [2024-04-24 21:34:00.256379] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.443 malloc0 00:18:37.443 [2024-04-24 21:34:00.284839] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.443 [2024-04-24 21:34:00.285054] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.443 21:34:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.443 21:34:00 -- target/tls.sh@252 -- # bdevperf_pid=2882539 00:18:37.443 21:34:00 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:37.443 21:34:00 -- target/tls.sh@254 -- # waitforlisten 2882539 /var/tmp/bdevperf.sock 00:18:37.443 21:34:00 -- common/autotest_common.sh@817 -- # '[' -z 2882539 ']' 00:18:37.443 21:34:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.443 21:34:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:37.443 21:34:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.443 21:34:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:37.443 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:18:37.701 [2024-04-24 21:34:00.360945] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:18:37.701 [2024-04-24 21:34:00.360989] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882539 ] 00:18:37.701 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.701 [2024-04-24 21:34:00.431902] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.701 [2024-04-24 21:34:00.501100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.269 21:34:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:38.269 21:34:01 -- common/autotest_common.sh@850 -- # return 0 00:18:38.269 21:34:01 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3qQaBjvwJ8 00:18:38.528 21:34:01 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:38.787 [2024-04-24 21:34:01.483797] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.787 nvme0n1 00:18:38.787 21:34:01 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:39.046 Running I/O for 1 seconds... 00:18:39.982 00:18:39.982 Latency(us) 00:18:39.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.982 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:39.982 Verification LBA range: start 0x0 length 0x2000 00:18:39.982 nvme0n1 : 1.06 1464.24 5.72 0.00 0.00 85440.47 6973.03 116601.65 00:18:39.982 =================================================================================================================== 00:18:39.982 Total : 1464.24 5.72 0.00 0.00 85440.47 6973.03 116601.65 00:18:39.982 0 00:18:39.982 21:34:02 -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:39.982 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.982 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:18:40.242 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:40.242 21:34:02 -- target/tls.sh@263 -- # tgtcfg='{ 00:18:40.242 "subsystems": [ 00:18:40.242 { 00:18:40.242 "subsystem": "keyring", 00:18:40.242 "config": [ 00:18:40.242 { 00:18:40.242 "method": "keyring_file_add_key", 00:18:40.242 "params": { 00:18:40.242 "name": "key0", 00:18:40.242 "path": "/tmp/tmp.3qQaBjvwJ8" 00:18:40.242 } 00:18:40.242 } 00:18:40.242 ] 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "subsystem": "iobuf", 00:18:40.242 "config": [ 00:18:40.242 { 00:18:40.242 "method": "iobuf_set_options", 00:18:40.242 "params": { 00:18:40.242 "small_pool_count": 8192, 00:18:40.242 "large_pool_count": 1024, 00:18:40.242 "small_bufsize": 8192, 00:18:40.242 "large_bufsize": 135168 00:18:40.242 } 00:18:40.242 } 00:18:40.242 ] 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "subsystem": "sock", 00:18:40.242 "config": [ 00:18:40.242 { 00:18:40.242 "method": "sock_impl_set_options", 00:18:40.242 "params": { 00:18:40.242 "impl_name": "posix", 00:18:40.242 "recv_buf_size": 2097152, 00:18:40.242 "send_buf_size": 2097152, 00:18:40.242 "enable_recv_pipe": true, 00:18:40.242 "enable_quickack": false, 00:18:40.242 "enable_placement_id": 0, 00:18:40.242 
"enable_zerocopy_send_server": true, 00:18:40.242 "enable_zerocopy_send_client": false, 00:18:40.242 "zerocopy_threshold": 0, 00:18:40.242 "tls_version": 0, 00:18:40.242 "enable_ktls": false 00:18:40.242 } 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "method": "sock_impl_set_options", 00:18:40.242 "params": { 00:18:40.242 "impl_name": "ssl", 00:18:40.242 "recv_buf_size": 4096, 00:18:40.242 "send_buf_size": 4096, 00:18:40.242 "enable_recv_pipe": true, 00:18:40.242 "enable_quickack": false, 00:18:40.242 "enable_placement_id": 0, 00:18:40.242 "enable_zerocopy_send_server": true, 00:18:40.242 "enable_zerocopy_send_client": false, 00:18:40.242 "zerocopy_threshold": 0, 00:18:40.242 "tls_version": 0, 00:18:40.242 "enable_ktls": false 00:18:40.242 } 00:18:40.242 } 00:18:40.242 ] 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "subsystem": "vmd", 00:18:40.242 "config": [] 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "subsystem": "accel", 00:18:40.242 "config": [ 00:18:40.242 { 00:18:40.242 "method": "accel_set_options", 00:18:40.242 "params": { 00:18:40.242 "small_cache_size": 128, 00:18:40.242 "large_cache_size": 16, 00:18:40.242 "task_count": 2048, 00:18:40.242 "sequence_count": 2048, 00:18:40.242 "buf_count": 2048 00:18:40.242 } 00:18:40.242 } 00:18:40.242 ] 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "subsystem": "bdev", 00:18:40.242 "config": [ 00:18:40.242 { 00:18:40.242 "method": "bdev_set_options", 00:18:40.242 "params": { 00:18:40.242 "bdev_io_pool_size": 65535, 00:18:40.242 "bdev_io_cache_size": 256, 00:18:40.242 "bdev_auto_examine": true, 00:18:40.242 "iobuf_small_cache_size": 128, 00:18:40.242 "iobuf_large_cache_size": 16 00:18:40.242 } 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "method": "bdev_raid_set_options", 00:18:40.242 "params": { 00:18:40.242 "process_window_size_kb": 1024 00:18:40.242 } 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "method": "bdev_iscsi_set_options", 00:18:40.242 "params": { 00:18:40.242 "timeout_sec": 30 00:18:40.242 } 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "method": "bdev_nvme_set_options", 00:18:40.242 "params": { 00:18:40.242 "action_on_timeout": "none", 00:18:40.242 "timeout_us": 0, 00:18:40.242 "timeout_admin_us": 0, 00:18:40.242 "keep_alive_timeout_ms": 10000, 00:18:40.242 "arbitration_burst": 0, 00:18:40.242 "low_priority_weight": 0, 00:18:40.242 "medium_priority_weight": 0, 00:18:40.242 "high_priority_weight": 0, 00:18:40.242 "nvme_adminq_poll_period_us": 10000, 00:18:40.242 "nvme_ioq_poll_period_us": 0, 00:18:40.242 "io_queue_requests": 0, 00:18:40.242 "delay_cmd_submit": true, 00:18:40.242 "transport_retry_count": 4, 00:18:40.242 "bdev_retry_count": 3, 00:18:40.242 "transport_ack_timeout": 0, 00:18:40.242 "ctrlr_loss_timeout_sec": 0, 00:18:40.242 "reconnect_delay_sec": 0, 00:18:40.242 "fast_io_fail_timeout_sec": 0, 00:18:40.242 "disable_auto_failback": false, 00:18:40.242 "generate_uuids": false, 00:18:40.242 "transport_tos": 0, 00:18:40.242 "nvme_error_stat": false, 00:18:40.242 "rdma_srq_size": 0, 00:18:40.242 "io_path_stat": false, 00:18:40.242 "allow_accel_sequence": false, 00:18:40.242 "rdma_max_cq_size": 0, 00:18:40.242 "rdma_cm_event_timeout_ms": 0, 00:18:40.242 "dhchap_digests": [ 00:18:40.242 "sha256", 00:18:40.242 "sha384", 00:18:40.242 "sha512" 00:18:40.242 ], 00:18:40.242 "dhchap_dhgroups": [ 00:18:40.242 "null", 00:18:40.242 "ffdhe2048", 00:18:40.242 "ffdhe3072", 00:18:40.242 "ffdhe4096", 00:18:40.242 "ffdhe6144", 00:18:40.242 "ffdhe8192" 00:18:40.242 ] 00:18:40.242 } 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "method": 
"bdev_nvme_set_hotplug", 00:18:40.242 "params": { 00:18:40.242 "period_us": 100000, 00:18:40.242 "enable": false 00:18:40.242 } 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "method": "bdev_malloc_create", 00:18:40.242 "params": { 00:18:40.242 "name": "malloc0", 00:18:40.242 "num_blocks": 8192, 00:18:40.242 "block_size": 4096, 00:18:40.242 "physical_block_size": 4096, 00:18:40.242 "uuid": "2467a356-0dc4-4742-b091-a4dc819f4f0e", 00:18:40.242 "optimal_io_boundary": 0 00:18:40.242 } 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "method": "bdev_wait_for_examine" 00:18:40.242 } 00:18:40.242 ] 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "subsystem": "nbd", 00:18:40.242 "config": [] 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "subsystem": "scheduler", 00:18:40.242 "config": [ 00:18:40.242 { 00:18:40.242 "method": "framework_set_scheduler", 00:18:40.242 "params": { 00:18:40.242 "name": "static" 00:18:40.242 } 00:18:40.242 } 00:18:40.242 ] 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "subsystem": "nvmf", 00:18:40.242 "config": [ 00:18:40.242 { 00:18:40.242 "method": "nvmf_set_config", 00:18:40.242 "params": { 00:18:40.242 "discovery_filter": "match_any", 00:18:40.242 "admin_cmd_passthru": { 00:18:40.242 "identify_ctrlr": false 00:18:40.242 } 00:18:40.242 } 00:18:40.242 }, 00:18:40.242 { 00:18:40.242 "method": "nvmf_set_max_subsystems", 00:18:40.243 "params": { 00:18:40.243 "max_subsystems": 1024 00:18:40.243 } 00:18:40.243 }, 00:18:40.243 { 00:18:40.243 "method": "nvmf_set_crdt", 00:18:40.243 "params": { 00:18:40.243 "crdt1": 0, 00:18:40.243 "crdt2": 0, 00:18:40.243 "crdt3": 0 00:18:40.243 } 00:18:40.243 }, 00:18:40.243 { 00:18:40.243 "method": "nvmf_create_transport", 00:18:40.243 "params": { 00:18:40.243 "trtype": "TCP", 00:18:40.243 "max_queue_depth": 128, 00:18:40.243 "max_io_qpairs_per_ctrlr": 127, 00:18:40.243 "in_capsule_data_size": 4096, 00:18:40.243 "max_io_size": 131072, 00:18:40.243 "io_unit_size": 131072, 00:18:40.243 "max_aq_depth": 128, 00:18:40.243 "num_shared_buffers": 511, 00:18:40.243 "buf_cache_size": 4294967295, 00:18:40.243 "dif_insert_or_strip": false, 00:18:40.243 "zcopy": false, 00:18:40.243 "c2h_success": false, 00:18:40.243 "sock_priority": 0, 00:18:40.243 "abort_timeout_sec": 1, 00:18:40.243 "ack_timeout": 0, 00:18:40.243 "data_wr_pool_size": 0 00:18:40.243 } 00:18:40.243 }, 00:18:40.243 { 00:18:40.243 "method": "nvmf_create_subsystem", 00:18:40.243 "params": { 00:18:40.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.243 "allow_any_host": false, 00:18:40.243 "serial_number": "00000000000000000000", 00:18:40.243 "model_number": "SPDK bdev Controller", 00:18:40.243 "max_namespaces": 32, 00:18:40.243 "min_cntlid": 1, 00:18:40.243 "max_cntlid": 65519, 00:18:40.243 "ana_reporting": false 00:18:40.243 } 00:18:40.243 }, 00:18:40.243 { 00:18:40.243 "method": "nvmf_subsystem_add_host", 00:18:40.243 "params": { 00:18:40.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.243 "host": "nqn.2016-06.io.spdk:host1", 00:18:40.243 "psk": "key0" 00:18:40.243 } 00:18:40.243 }, 00:18:40.243 { 00:18:40.243 "method": "nvmf_subsystem_add_ns", 00:18:40.243 "params": { 00:18:40.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.243 "namespace": { 00:18:40.243 "nsid": 1, 00:18:40.243 "bdev_name": "malloc0", 00:18:40.243 "nguid": "2467A3560DC44742B091A4DC819F4F0E", 00:18:40.243 "uuid": "2467a356-0dc4-4742-b091-a4dc819f4f0e", 00:18:40.243 "no_auto_visible": false 00:18:40.243 } 00:18:40.243 } 00:18:40.243 }, 00:18:40.243 { 00:18:40.243 "method": "nvmf_subsystem_add_listener", 00:18:40.243 "params": { 
00:18:40.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.243 "listen_address": { 00:18:40.243 "trtype": "TCP", 00:18:40.243 "adrfam": "IPv4", 00:18:40.243 "traddr": "10.0.0.2", 00:18:40.243 "trsvcid": "4420" 00:18:40.243 }, 00:18:40.243 "secure_channel": true 00:18:40.243 } 00:18:40.243 } 00:18:40.243 ] 00:18:40.243 } 00:18:40.243 ] 00:18:40.243 }' 00:18:40.243 21:34:02 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:40.502 21:34:03 -- target/tls.sh@264 -- # bperfcfg='{ 00:18:40.502 "subsystems": [ 00:18:40.502 { 00:18:40.502 "subsystem": "keyring", 00:18:40.502 "config": [ 00:18:40.502 { 00:18:40.502 "method": "keyring_file_add_key", 00:18:40.502 "params": { 00:18:40.502 "name": "key0", 00:18:40.502 "path": "/tmp/tmp.3qQaBjvwJ8" 00:18:40.502 } 00:18:40.502 } 00:18:40.502 ] 00:18:40.502 }, 00:18:40.502 { 00:18:40.502 "subsystem": "iobuf", 00:18:40.502 "config": [ 00:18:40.502 { 00:18:40.502 "method": "iobuf_set_options", 00:18:40.502 "params": { 00:18:40.502 "small_pool_count": 8192, 00:18:40.502 "large_pool_count": 1024, 00:18:40.502 "small_bufsize": 8192, 00:18:40.502 "large_bufsize": 135168 00:18:40.502 } 00:18:40.502 } 00:18:40.502 ] 00:18:40.502 }, 00:18:40.502 { 00:18:40.502 "subsystem": "sock", 00:18:40.502 "config": [ 00:18:40.502 { 00:18:40.502 "method": "sock_impl_set_options", 00:18:40.502 "params": { 00:18:40.502 "impl_name": "posix", 00:18:40.502 "recv_buf_size": 2097152, 00:18:40.502 "send_buf_size": 2097152, 00:18:40.502 "enable_recv_pipe": true, 00:18:40.502 "enable_quickack": false, 00:18:40.502 "enable_placement_id": 0, 00:18:40.502 "enable_zerocopy_send_server": true, 00:18:40.502 "enable_zerocopy_send_client": false, 00:18:40.502 "zerocopy_threshold": 0, 00:18:40.502 "tls_version": 0, 00:18:40.502 "enable_ktls": false 00:18:40.502 } 00:18:40.502 }, 00:18:40.502 { 00:18:40.502 "method": "sock_impl_set_options", 00:18:40.502 "params": { 00:18:40.502 "impl_name": "ssl", 00:18:40.502 "recv_buf_size": 4096, 00:18:40.502 "send_buf_size": 4096, 00:18:40.502 "enable_recv_pipe": true, 00:18:40.502 "enable_quickack": false, 00:18:40.502 "enable_placement_id": 0, 00:18:40.502 "enable_zerocopy_send_server": true, 00:18:40.502 "enable_zerocopy_send_client": false, 00:18:40.502 "zerocopy_threshold": 0, 00:18:40.502 "tls_version": 0, 00:18:40.502 "enable_ktls": false 00:18:40.502 } 00:18:40.502 } 00:18:40.502 ] 00:18:40.502 }, 00:18:40.502 { 00:18:40.502 "subsystem": "vmd", 00:18:40.502 "config": [] 00:18:40.502 }, 00:18:40.502 { 00:18:40.502 "subsystem": "accel", 00:18:40.502 "config": [ 00:18:40.502 { 00:18:40.502 "method": "accel_set_options", 00:18:40.502 "params": { 00:18:40.502 "small_cache_size": 128, 00:18:40.502 "large_cache_size": 16, 00:18:40.502 "task_count": 2048, 00:18:40.502 "sequence_count": 2048, 00:18:40.502 "buf_count": 2048 00:18:40.502 } 00:18:40.502 } 00:18:40.502 ] 00:18:40.502 }, 00:18:40.502 { 00:18:40.502 "subsystem": "bdev", 00:18:40.502 "config": [ 00:18:40.502 { 00:18:40.502 "method": "bdev_set_options", 00:18:40.502 "params": { 00:18:40.502 "bdev_io_pool_size": 65535, 00:18:40.502 "bdev_io_cache_size": 256, 00:18:40.502 "bdev_auto_examine": true, 00:18:40.502 "iobuf_small_cache_size": 128, 00:18:40.502 "iobuf_large_cache_size": 16 00:18:40.502 } 00:18:40.502 }, 00:18:40.502 { 00:18:40.502 "method": "bdev_raid_set_options", 00:18:40.502 "params": { 00:18:40.502 "process_window_size_kb": 1024 00:18:40.502 } 00:18:40.502 }, 00:18:40.502 { 00:18:40.502 "method": 
"bdev_iscsi_set_options", 00:18:40.502 "params": { 00:18:40.502 "timeout_sec": 30 00:18:40.502 } 00:18:40.502 }, 00:18:40.502 { 00:18:40.502 "method": "bdev_nvme_set_options", 00:18:40.503 "params": { 00:18:40.503 "action_on_timeout": "none", 00:18:40.503 "timeout_us": 0, 00:18:40.503 "timeout_admin_us": 0, 00:18:40.503 "keep_alive_timeout_ms": 10000, 00:18:40.503 "arbitration_burst": 0, 00:18:40.503 "low_priority_weight": 0, 00:18:40.503 "medium_priority_weight": 0, 00:18:40.503 "high_priority_weight": 0, 00:18:40.503 "nvme_adminq_poll_period_us": 10000, 00:18:40.503 "nvme_ioq_poll_period_us": 0, 00:18:40.503 "io_queue_requests": 512, 00:18:40.503 "delay_cmd_submit": true, 00:18:40.503 "transport_retry_count": 4, 00:18:40.503 "bdev_retry_count": 3, 00:18:40.503 "transport_ack_timeout": 0, 00:18:40.503 "ctrlr_loss_timeout_sec": 0, 00:18:40.503 "reconnect_delay_sec": 0, 00:18:40.503 "fast_io_fail_timeout_sec": 0, 00:18:40.503 "disable_auto_failback": false, 00:18:40.503 "generate_uuids": false, 00:18:40.503 "transport_tos": 0, 00:18:40.503 "nvme_error_stat": false, 00:18:40.503 "rdma_srq_size": 0, 00:18:40.503 "io_path_stat": false, 00:18:40.503 "allow_accel_sequence": false, 00:18:40.503 "rdma_max_cq_size": 0, 00:18:40.503 "rdma_cm_event_timeout_ms": 0, 00:18:40.503 "dhchap_digests": [ 00:18:40.503 "sha256", 00:18:40.503 "sha384", 00:18:40.503 "sha512" 00:18:40.503 ], 00:18:40.503 "dhchap_dhgroups": [ 00:18:40.503 "null", 00:18:40.503 "ffdhe2048", 00:18:40.503 "ffdhe3072", 00:18:40.503 "ffdhe4096", 00:18:40.503 "ffdhe6144", 00:18:40.503 "ffdhe8192" 00:18:40.503 ] 00:18:40.503 } 00:18:40.503 }, 00:18:40.503 { 00:18:40.503 "method": "bdev_nvme_attach_controller", 00:18:40.503 "params": { 00:18:40.503 "name": "nvme0", 00:18:40.503 "trtype": "TCP", 00:18:40.503 "adrfam": "IPv4", 00:18:40.503 "traddr": "10.0.0.2", 00:18:40.503 "trsvcid": "4420", 00:18:40.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.503 "prchk_reftag": false, 00:18:40.503 "prchk_guard": false, 00:18:40.503 "ctrlr_loss_timeout_sec": 0, 00:18:40.503 "reconnect_delay_sec": 0, 00:18:40.503 "fast_io_fail_timeout_sec": 0, 00:18:40.503 "psk": "key0", 00:18:40.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:40.503 "hdgst": false, 00:18:40.503 "ddgst": false 00:18:40.503 } 00:18:40.503 }, 00:18:40.503 { 00:18:40.503 "method": "bdev_nvme_set_hotplug", 00:18:40.503 "params": { 00:18:40.503 "period_us": 100000, 00:18:40.503 "enable": false 00:18:40.503 } 00:18:40.503 }, 00:18:40.503 { 00:18:40.503 "method": "bdev_enable_histogram", 00:18:40.503 "params": { 00:18:40.503 "name": "nvme0n1", 00:18:40.503 "enable": true 00:18:40.503 } 00:18:40.503 }, 00:18:40.503 { 00:18:40.503 "method": "bdev_wait_for_examine" 00:18:40.503 } 00:18:40.503 ] 00:18:40.503 }, 00:18:40.503 { 00:18:40.503 "subsystem": "nbd", 00:18:40.503 "config": [] 00:18:40.503 } 00:18:40.503 ] 00:18:40.503 }' 00:18:40.503 21:34:03 -- target/tls.sh@266 -- # killprocess 2882539 00:18:40.503 21:34:03 -- common/autotest_common.sh@936 -- # '[' -z 2882539 ']' 00:18:40.503 21:34:03 -- common/autotest_common.sh@940 -- # kill -0 2882539 00:18:40.503 21:34:03 -- common/autotest_common.sh@941 -- # uname 00:18:40.503 21:34:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:40.503 21:34:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2882539 00:18:40.503 21:34:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:40.503 21:34:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:40.503 21:34:03 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 2882539' 00:18:40.503 killing process with pid 2882539 00:18:40.503 21:34:03 -- common/autotest_common.sh@955 -- # kill 2882539 00:18:40.503 Received shutdown signal, test time was about 1.000000 seconds 00:18:40.503 00:18:40.503 Latency(us) 00:18:40.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.503 =================================================================================================================== 00:18:40.503 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.503 21:34:03 -- common/autotest_common.sh@960 -- # wait 2882539 00:18:40.762 21:34:03 -- target/tls.sh@267 -- # killprocess 2882441 00:18:40.762 21:34:03 -- common/autotest_common.sh@936 -- # '[' -z 2882441 ']' 00:18:40.762 21:34:03 -- common/autotest_common.sh@940 -- # kill -0 2882441 00:18:40.762 21:34:03 -- common/autotest_common.sh@941 -- # uname 00:18:40.762 21:34:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:40.762 21:34:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2882441 00:18:40.762 21:34:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:40.762 21:34:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:40.762 21:34:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2882441' 00:18:40.762 killing process with pid 2882441 00:18:40.762 21:34:03 -- common/autotest_common.sh@955 -- # kill 2882441 00:18:40.762 21:34:03 -- common/autotest_common.sh@960 -- # wait 2882441 00:18:41.022 21:34:03 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:41.022 21:34:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:41.022 21:34:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:41.022 21:34:03 -- target/tls.sh@269 -- # echo '{ 00:18:41.022 "subsystems": [ 00:18:41.022 { 00:18:41.022 "subsystem": "keyring", 00:18:41.022 "config": [ 00:18:41.022 { 00:18:41.022 "method": "keyring_file_add_key", 00:18:41.022 "params": { 00:18:41.022 "name": "key0", 00:18:41.022 "path": "/tmp/tmp.3qQaBjvwJ8" 00:18:41.022 } 00:18:41.022 } 00:18:41.022 ] 00:18:41.022 }, 00:18:41.022 { 00:18:41.022 "subsystem": "iobuf", 00:18:41.022 "config": [ 00:18:41.022 { 00:18:41.022 "method": "iobuf_set_options", 00:18:41.022 "params": { 00:18:41.022 "small_pool_count": 8192, 00:18:41.022 "large_pool_count": 1024, 00:18:41.022 "small_bufsize": 8192, 00:18:41.022 "large_bufsize": 135168 00:18:41.022 } 00:18:41.022 } 00:18:41.022 ] 00:18:41.022 }, 00:18:41.022 { 00:18:41.022 "subsystem": "sock", 00:18:41.022 "config": [ 00:18:41.022 { 00:18:41.022 "method": "sock_impl_set_options", 00:18:41.022 "params": { 00:18:41.022 "impl_name": "posix", 00:18:41.022 "recv_buf_size": 2097152, 00:18:41.022 "send_buf_size": 2097152, 00:18:41.022 "enable_recv_pipe": true, 00:18:41.022 "enable_quickack": false, 00:18:41.022 "enable_placement_id": 0, 00:18:41.022 "enable_zerocopy_send_server": true, 00:18:41.022 "enable_zerocopy_send_client": false, 00:18:41.022 "zerocopy_threshold": 0, 00:18:41.022 "tls_version": 0, 00:18:41.022 "enable_ktls": false 00:18:41.022 } 00:18:41.022 }, 00:18:41.022 { 00:18:41.022 "method": "sock_impl_set_options", 00:18:41.022 "params": { 00:18:41.022 "impl_name": "ssl", 00:18:41.022 "recv_buf_size": 4096, 00:18:41.022 "send_buf_size": 4096, 00:18:41.022 "enable_recv_pipe": true, 00:18:41.022 "enable_quickack": false, 00:18:41.022 "enable_placement_id": 0, 00:18:41.022 "enable_zerocopy_send_server": true, 00:18:41.022 
"enable_zerocopy_send_client": false, 00:18:41.022 "zerocopy_threshold": 0, 00:18:41.022 "tls_version": 0, 00:18:41.022 "enable_ktls": false 00:18:41.022 } 00:18:41.022 } 00:18:41.022 ] 00:18:41.022 }, 00:18:41.022 { 00:18:41.022 "subsystem": "vmd", 00:18:41.022 "config": [] 00:18:41.022 }, 00:18:41.022 { 00:18:41.022 "subsystem": "accel", 00:18:41.022 "config": [ 00:18:41.022 { 00:18:41.022 "method": "accel_set_options", 00:18:41.022 "params": { 00:18:41.022 "small_cache_size": 128, 00:18:41.022 "large_cache_size": 16, 00:18:41.022 "task_count": 2048, 00:18:41.022 "sequence_count": 2048, 00:18:41.022 "buf_count": 2048 00:18:41.022 } 00:18:41.022 } 00:18:41.022 ] 00:18:41.022 }, 00:18:41.022 { 00:18:41.022 "subsystem": "bdev", 00:18:41.022 "config": [ 00:18:41.022 { 00:18:41.022 "method": "bdev_set_options", 00:18:41.022 "params": { 00:18:41.022 "bdev_io_pool_size": 65535, 00:18:41.022 "bdev_io_cache_size": 256, 00:18:41.022 "bdev_auto_examine": true, 00:18:41.022 "iobuf_small_cache_size": 128, 00:18:41.022 "iobuf_large_cache_size": 16 00:18:41.022 } 00:18:41.022 }, 00:18:41.022 { 00:18:41.022 "method": "bdev_raid_set_options", 00:18:41.022 "params": { 00:18:41.022 "process_window_size_kb": 1024 00:18:41.022 } 00:18:41.022 }, 00:18:41.022 { 00:18:41.022 "method": "bdev_iscsi_set_options", 00:18:41.022 "params": { 00:18:41.022 "timeout_sec": 30 00:18:41.022 } 00:18:41.022 }, 00:18:41.022 { 00:18:41.022 "method": "bdev_nvme_set_options", 00:18:41.022 "params": { 00:18:41.022 "action_on_timeout": "none", 00:18:41.022 "timeout_us": 0, 00:18:41.022 "timeout_admin_us": 0, 00:18:41.022 "keep_alive_timeout_ms": 10000, 00:18:41.022 "arbitration_burst": 0, 00:18:41.022 "low_priority_weight": 0, 00:18:41.022 "medium_priority_weight": 0, 00:18:41.022 "high_priority_weight": 0, 00:18:41.022 "nvme_adminq_poll_period_us": 10000, 00:18:41.022 "nvme_ioq_poll_period_us": 0, 00:18:41.022 "io_queue_requests": 0, 00:18:41.022 "delay_cmd_submit": true, 00:18:41.022 "transport_retry_count": 4, 00:18:41.022 "bdev_retry_count": 3, 00:18:41.022 "transport_ack_timeout": 0, 00:18:41.022 "ctrlr_loss_timeout_sec": 0, 00:18:41.022 "reconnect_delay_sec": 0, 00:18:41.022 "fast_io_fail_timeout_sec": 0, 00:18:41.022 "disable_auto_failback": false, 00:18:41.022 "generate_uuids": false, 00:18:41.022 "transport_tos": 0, 00:18:41.022 "nvme_error_stat": false, 00:18:41.022 "rdma_srq_size": 0, 00:18:41.022 "io_path_stat": false, 00:18:41.022 "allow_accel_sequence": false, 00:18:41.022 "rdma_max_cq_size": 0, 00:18:41.022 "rdma_cm_event_timeout_ms": 0, 00:18:41.022 "dhchap_digests": [ 00:18:41.022 "sha256", 00:18:41.022 "sha384", 00:18:41.022 "sha512" 00:18:41.022 ], 00:18:41.022 "dhchap_dhgroups": [ 00:18:41.022 "null", 00:18:41.022 "ffdhe2048", 00:18:41.022 "ffdhe3072", 00:18:41.022 "ffdhe4096", 00:18:41.022 "ffdhe6144", 00:18:41.022 "ffdhe8192" 00:18:41.022 ] 00:18:41.022 } 00:18:41.022 }, 00:18:41.022 { 00:18:41.022 "method": "bdev_nvme_set_hotplug", 00:18:41.022 "params": { 00:18:41.022 "period_us": 100000, 00:18:41.022 "enable": false 00:18:41.022 } 00:18:41.023 }, 00:18:41.023 { 00:18:41.023 "method": "bdev_malloc_create", 00:18:41.023 "params": { 00:18:41.023 "name": "malloc0", 00:18:41.023 "num_blocks": 8192, 00:18:41.023 "block_size": 4096, 00:18:41.023 "physical_block_size": 4096, 00:18:41.023 "uuid": "2467a356-0dc4-4742-b091-a4dc819f4f0e", 00:18:41.023 "optimal_io_boundary": 0 00:18:41.023 } 00:18:41.023 }, 00:18:41.023 { 00:18:41.023 "method": "bdev_wait_for_examine" 00:18:41.023 } 00:18:41.023 ] 00:18:41.023 }, 
00:18:41.023 { 00:18:41.023 "subsystem": "nbd", 00:18:41.023 "config": [] 00:18:41.023 }, 00:18:41.023 { 00:18:41.023 "subsystem": "scheduler", 00:18:41.023 "config": [ 00:18:41.023 { 00:18:41.023 "method": "framework_set_scheduler", 00:18:41.023 "params": { 00:18:41.023 "name": "static" 00:18:41.023 } 00:18:41.023 } 00:18:41.023 ] 00:18:41.023 }, 00:18:41.023 { 00:18:41.023 "subsystem": "nvmf", 00:18:41.023 "config": [ 00:18:41.023 { 00:18:41.023 "method": "nvmf_set_config", 00:18:41.023 "params": { 00:18:41.023 "discovery_filter": "match_any", 00:18:41.023 "admin_cmd_passthru": { 00:18:41.023 "identify_ctrlr": false 00:18:41.023 } 00:18:41.023 } 00:18:41.023 }, 00:18:41.023 { 00:18:41.023 "method": "nvmf_set_max_subsystems", 00:18:41.023 "params": { 00:18:41.023 "max_subsystems": 1024 00:18:41.023 } 00:18:41.023 }, 00:18:41.023 { 00:18:41.023 "method": "nvmf_set_crdt", 00:18:41.023 "params": { 00:18:41.023 "crdt1": 0, 00:18:41.023 "crdt2": 0, 00:18:41.023 "crdt3": 0 00:18:41.023 } 00:18:41.023 }, 00:18:41.023 { 00:18:41.023 "method": "nvmf_create_transport", 00:18:41.023 "params": { 00:18:41.023 "trtype": "TCP", 00:18:41.023 "max_queue_depth": 128, 00:18:41.023 "max_io_qpairs_per_ctrlr": 127, 00:18:41.023 "in_capsule_data_size": 4096, 00:18:41.023 "max_io_size": 131072, 00:18:41.023 "io_unit_size": 131072, 00:18:41.023 "max_aq_depth": 128, 00:18:41.023 "num_shared_buffers": 511, 00:18:41.023 "buf_cache_size": 4294967295, 00:18:41.023 "dif_insert_or_strip": false, 00:18:41.023 "zcopy": false, 00:18:41.023 "c2h_success": false, 00:18:41.023 "sock_priority": 0, 00:18:41.023 "abort_timeout_sec": 1, 00:18:41.023 "ack_timeout": 0, 00:18:41.023 "data_wr_pool_size": 0 00:18:41.023 } 00:18:41.023 }, 00:18:41.023 { 00:18:41.023 "method": "nvmf_create_subsystem", 00:18:41.023 "params": { 00:18:41.023 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.023 "allow_any_host": false, 00:18:41.023 "serial_number": "00000000000000000000", 00:18:41.023 "model_number": "SPDK bdev Controller", 00:18:41.023 "max_namespaces": 32, 00:18:41.023 "min_cntlid": 1, 00:18:41.023 "max_cntlid": 65519, 00:18:41.023 "ana_reporting": false 00:18:41.023 } 00:18:41.023 }, 00:18:41.023 { 00:18:41.023 "method": "nvmf_subsystem_add_host", 00:18:41.023 "params": { 00:18:41.023 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.023 "host": "nqn.2016-06.io.spdk:host1", 00:18:41.023 "psk": "key0" 00:18:41.023 } 00:18:41.023 }, 00:18:41.023 { 00:18:41.023 "method": "nvmf_subsystem_add_ns", 00:18:41.023 "params": { 00:18:41.023 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.023 "namespace": { 00:18:41.023 "nsid": 1, 00:18:41.023 "bdev_name": "malloc0", 00:18:41.023 "nguid": "2467A3560DC44742B091A4DC819F4F0E", 00:18:41.023 "uuid": "2467a356-0dc4-4742-b091-a4dc819f4f0e", 00:18:41.023 "no_auto_visible": false 00:18:41.023 } 00:18:41.023 } 00:18:41.023 }, 00:18:41.023 { 00:18:41.023 "method": "nvmf_subsystem_add_listener", 00:18:41.023 "params": { 00:18:41.023 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.023 "listen_address": { 00:18:41.023 "trtype": "TCP", 00:18:41.023 "adrfam": "IPv4", 00:18:41.023 "traddr": "10.0.0.2", 00:18:41.023 "trsvcid": "4420" 00:18:41.023 }, 00:18:41.023 "secure_channel": true 00:18:41.023 } 00:18:41.023 } 00:18:41.023 ] 00:18:41.023 } 00:18:41.023 ] 00:18:41.023 }' 00:18:41.023 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:18:41.023 21:34:03 -- nvmf/common.sh@470 -- # nvmfpid=2883178 00:18:41.023 21:34:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:41.023 21:34:03 -- nvmf/common.sh@471 -- # waitforlisten 2883178 00:18:41.023 21:34:03 -- common/autotest_common.sh@817 -- # '[' -z 2883178 ']' 00:18:41.023 21:34:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.023 21:34:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:41.023 21:34:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.023 21:34:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:41.023 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:18:41.023 [2024-04-24 21:34:03.717496] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:18:41.023 [2024-04-24 21:34:03.717545] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.023 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.023 [2024-04-24 21:34:03.788947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.023 [2024-04-24 21:34:03.860678] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.023 [2024-04-24 21:34:03.860714] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.023 [2024-04-24 21:34:03.860723] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.023 [2024-04-24 21:34:03.860731] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.023 [2024-04-24 21:34:03.860754] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
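The JSON replayed into this fresh target via -c /dev/fd/62 carries the complete TLS setup: the PSK file registered as key0 in the keyring subsystem, nvmf_subsystem_add_host granting host1 access with "psk": "key0", and a listener created with secure_channel enabled; the initiator performed the matching steps over the bdevperf RPC socket at the top of this run. A condensed sketch of that round trip, using the paths and key file from this log (the command-line form of nvmf_subsystem_add_host is inferred from the saved JSON, and note the deprecation warning later in this log indicating --psk historically took a file path rather than a keyring name):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key=/tmp/tmp.3qQaBjvwJ8

    # Target side: register the PSK under a keyring name, then require it for host1.
    $rpc keyring_file_add_key key0 "$key"
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0

    # Initiator side (bdevperf): register the same key, then attach over TLS.
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key"
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1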
00:18:41.023 [2024-04-24 21:34:03.860814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.282 [2024-04-24 21:34:04.062754] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.282 [2024-04-24 21:34:04.094781] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:41.282 [2024-04-24 21:34:04.105834] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.849 21:34:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:41.849 21:34:04 -- common/autotest_common.sh@850 -- # return 0 00:18:41.849 21:34:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:41.849 21:34:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:41.850 21:34:04 -- common/autotest_common.sh@10 -- # set +x 00:18:41.850 21:34:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.850 21:34:04 -- target/tls.sh@272 -- # bdevperf_pid=2883308 00:18:41.850 21:34:04 -- target/tls.sh@273 -- # waitforlisten 2883308 /var/tmp/bdevperf.sock 00:18:41.850 21:34:04 -- common/autotest_common.sh@817 -- # '[' -z 2883308 ']' 00:18:41.850 21:34:04 -- target/tls.sh@270 -- # echo '{ 00:18:41.850 "subsystems": [ 00:18:41.850 { 00:18:41.850 "subsystem": "keyring", 00:18:41.850 "config": [ 00:18:41.850 { 00:18:41.850 "method": "keyring_file_add_key", 00:18:41.850 "params": { 00:18:41.850 "name": "key0", 00:18:41.850 "path": "/tmp/tmp.3qQaBjvwJ8" 00:18:41.850 } 00:18:41.850 } 00:18:41.850 ] 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "subsystem": "iobuf", 00:18:41.850 "config": [ 00:18:41.850 { 00:18:41.850 "method": "iobuf_set_options", 00:18:41.850 "params": { 00:18:41.850 "small_pool_count": 8192, 00:18:41.850 "large_pool_count": 1024, 00:18:41.850 "small_bufsize": 8192, 00:18:41.850 "large_bufsize": 135168 00:18:41.850 } 00:18:41.850 } 00:18:41.850 ] 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "subsystem": "sock", 00:18:41.850 "config": [ 00:18:41.850 { 00:18:41.850 "method": "sock_impl_set_options", 00:18:41.850 "params": { 00:18:41.850 "impl_name": "posix", 00:18:41.850 "recv_buf_size": 2097152, 00:18:41.850 "send_buf_size": 2097152, 00:18:41.850 "enable_recv_pipe": true, 00:18:41.850 "enable_quickack": false, 00:18:41.850 "enable_placement_id": 0, 00:18:41.850 "enable_zerocopy_send_server": true, 00:18:41.850 "enable_zerocopy_send_client": false, 00:18:41.850 "zerocopy_threshold": 0, 00:18:41.850 "tls_version": 0, 00:18:41.850 "enable_ktls": false 00:18:41.850 } 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "method": "sock_impl_set_options", 00:18:41.850 "params": { 00:18:41.850 "impl_name": "ssl", 00:18:41.850 "recv_buf_size": 4096, 00:18:41.850 "send_buf_size": 4096, 00:18:41.850 "enable_recv_pipe": true, 00:18:41.850 "enable_quickack": false, 00:18:41.850 "enable_placement_id": 0, 00:18:41.850 "enable_zerocopy_send_server": true, 00:18:41.850 "enable_zerocopy_send_client": false, 00:18:41.850 "zerocopy_threshold": 0, 00:18:41.850 "tls_version": 0, 00:18:41.850 "enable_ktls": false 00:18:41.850 } 00:18:41.850 } 00:18:41.850 ] 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "subsystem": "vmd", 00:18:41.850 "config": [] 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "subsystem": "accel", 00:18:41.850 "config": [ 00:18:41.850 { 00:18:41.850 "method": "accel_set_options", 00:18:41.850 "params": { 00:18:41.850 "small_cache_size": 128, 00:18:41.850 "large_cache_size": 16, 00:18:41.850 "task_count": 2048, 00:18:41.850 "sequence_count": 2048, 
00:18:41.850 "buf_count": 2048 00:18:41.850 } 00:18:41.850 } 00:18:41.850 ] 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "subsystem": "bdev", 00:18:41.850 "config": [ 00:18:41.850 { 00:18:41.850 "method": "bdev_set_options", 00:18:41.850 "params": { 00:18:41.850 "bdev_io_pool_size": 65535, 00:18:41.850 "bdev_io_cache_size": 256, 00:18:41.850 "bdev_auto_examine": true, 00:18:41.850 "iobuf_small_cache_size": 128, 00:18:41.850 "iobuf_large_cache_size": 16 00:18:41.850 } 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "method": "bdev_raid_set_options", 00:18:41.850 "params": { 00:18:41.850 "process_window_size_kb": 1024 00:18:41.850 } 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "method": "bdev_iscsi_set_options", 00:18:41.850 "params": { 00:18:41.850 "timeout_sec": 30 00:18:41.850 } 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "method": "bdev_nvme_set_options", 00:18:41.850 "params": { 00:18:41.850 "action_on_timeout": "none", 00:18:41.850 "timeout_us": 0, 00:18:41.850 "timeout_admin_us": 0, 00:18:41.850 "keep_alive_timeout_ms": 10000, 00:18:41.850 "arbitration_burst": 0, 00:18:41.850 "low_priority_weight": 0, 00:18:41.850 "medium_priority_weight": 0, 00:18:41.850 "high_priority_weight": 0, 00:18:41.850 "nvme_adminq_poll_period_us": 10000, 00:18:41.850 "nvme_ioq_poll_period_us": 0, 00:18:41.850 "io_queue_requests": 512, 00:18:41.850 "delay_cmd_submit": true, 00:18:41.850 "transport_retry_count": 4, 00:18:41.850 "bdev_retry_count": 3, 00:18:41.850 "transport_ack_timeout": 0, 00:18:41.850 "ctrlr_loss_timeout_sec": 0, 00:18:41.850 "reconnect_delay_sec": 0, 00:18:41.850 "fast_io_fail_timeout_sec": 0, 00:18:41.850 "disable_auto_failback": false, 00:18:41.850 "generate_uuids": false, 00:18:41.850 "transport_tos": 0, 00:18:41.850 "nvme_error_stat": false, 00:18:41.850 "rdma_srq_size": 0, 00:18:41.850 "io_path_stat": false, 00:18:41.850 "allow_accel_sequence": false, 00:18:41.850 "rdma_max_cq_size": 0, 00:18:41.850 "rdma_cm_event_timeout_ms": 0, 00:18:41.850 "dhchap_digests": [ 00:18:41.850 "sha256", 00:18:41.850 "sha384", 00:18:41.850 "sha512" 00:18:41.850 ], 00:18:41.850 "dhchap_dhgroups": [ 00:18:41.850 "null", 00:18:41.850 "ffdhe2048", 00:18:41.850 "ffdhe3072", 00:18:41.850 "ffdhe4096", 00:18:41.850 "ffdhe6144", 00:18:41.850 "ffdhe8192" 00:18:41.850 ] 00:18:41.850 } 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "method": "bdev_nvme_attach_controller", 00:18:41.850 "params": { 00:18:41.850 "name": "nvme0", 00:18:41.850 "trtype": "TCP", 00:18:41.850 "adrfam": "IPv4", 00:18:41.850 "traddr": "10.0.0.2", 00:18:41.850 "trsvcid": "4420", 00:18:41.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.850 "prchk_reftag": false, 00:18:41.850 "prchk_guard": false, 00:18:41.850 "ctrlr_loss_timeout_sec": 0, 00:18:41.850 "reconnect_delay_sec": 0, 00:18:41.850 "fast_io_fail_timeout_sec": 0, 00:18:41.850 "psk": "key0", 00:18:41.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.850 "hdgst": false, 00:18:41.850 "ddgst": false 00:18:41.850 } 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "method": "bdev_nvme_set_hotplug", 00:18:41.850 "params": { 00:18:41.850 "period_us": 100000, 00:18:41.850 "enable": false 00:18:41.850 } 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "method": "bdev_enable_histogram", 00:18:41.850 "params": { 00:18:41.850 "name": "nvme0n1", 00:18:41.850 "enable": true 00:18:41.850 } 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "method": "bdev_wait_for_examine" 00:18:41.850 } 00:18:41.850 ] 00:18:41.850 }, 00:18:41.850 { 00:18:41.850 "subsystem": "nbd", 00:18:41.850 "config": [] 00:18:41.850 } 00:18:41.850 ] 
00:18:41.850 }' 00:18:41.850 21:34:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.850 21:34:04 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:41.850 21:34:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:41.850 21:34:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.850 21:34:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:41.850 21:34:04 -- common/autotest_common.sh@10 -- # set +x 00:18:41.850 [2024-04-24 21:34:04.602124] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:18:41.850 [2024-04-24 21:34:04.602177] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883308 ] 00:18:41.850 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.850 [2024-04-24 21:34:04.670920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.108 [2024-04-24 21:34:04.740025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.108 [2024-04-24 21:34:04.881480] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.675 21:34:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:42.675 21:34:05 -- common/autotest_common.sh@850 -- # return 0 00:18:42.675 21:34:05 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:42.675 21:34:05 -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:42.933 21:34:05 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.933 21:34:05 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:42.933 Running I/O for 1 seconds... 
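Before the workload starts, the script verifies that the TLS attach actually produced a controller: it queries bdev_nvme_get_controllers over the bdevperf socket, compares the reported name against nvme0, and only then launches the verify job through bdevperf's RPC helper. The equivalent commands, lifted directly from the trace above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py

    name=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1        # the TLS attach must have succeeded

    $bperf -s /var/tmp/bdevperf.sock perform_tests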
00:18:43.871 00:18:43.872 Latency(us) 00:18:43.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.872 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:43.872 Verification LBA range: start 0x0 length 0x2000 00:18:43.872 nvme0n1 : 1.06 1402.07 5.48 0.00 0.00 89263.40 6632.24 151833.80 00:18:43.872 =================================================================================================================== 00:18:43.872 Total : 1402.07 5.48 0.00 0.00 89263.40 6632.24 151833.80 00:18:43.872 0 00:18:43.872 21:34:06 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:43.872 21:34:06 -- target/tls.sh@279 -- # cleanup 00:18:43.872 21:34:06 -- target/tls.sh@15 -- # process_shm --id 0 00:18:43.872 21:34:06 -- common/autotest_common.sh@794 -- # type=--id 00:18:43.872 21:34:06 -- common/autotest_common.sh@795 -- # id=0 00:18:43.872 21:34:06 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:43.872 21:34:06 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:43.872 21:34:06 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:43.872 21:34:06 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:43.872 21:34:06 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:43.872 21:34:06 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:43.872 nvmf_trace.0 00:18:44.131 21:34:06 -- common/autotest_common.sh@809 -- # return 0 00:18:44.131 21:34:06 -- target/tls.sh@16 -- # killprocess 2883308 00:18:44.131 21:34:06 -- common/autotest_common.sh@936 -- # '[' -z 2883308 ']' 00:18:44.131 21:34:06 -- common/autotest_common.sh@940 -- # kill -0 2883308 00:18:44.131 21:34:06 -- common/autotest_common.sh@941 -- # uname 00:18:44.131 21:34:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:44.131 21:34:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2883308 00:18:44.131 21:34:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:44.131 21:34:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:44.131 21:34:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2883308' 00:18:44.131 killing process with pid 2883308 00:18:44.131 21:34:06 -- common/autotest_common.sh@955 -- # kill 2883308 00:18:44.131 Received shutdown signal, test time was about 1.000000 seconds 00:18:44.131 00:18:44.131 Latency(us) 00:18:44.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.131 =================================================================================================================== 00:18:44.131 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:44.131 21:34:06 -- common/autotest_common.sh@960 -- # wait 2883308 00:18:44.391 21:34:07 -- target/tls.sh@17 -- # nvmftestfini 00:18:44.391 21:34:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:44.391 21:34:07 -- nvmf/common.sh@117 -- # sync 00:18:44.391 21:34:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:44.391 21:34:07 -- nvmf/common.sh@120 -- # set +e 00:18:44.391 21:34:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:44.391 21:34:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:44.391 rmmod nvme_tcp 00:18:44.391 rmmod nvme_fabrics 00:18:44.391 rmmod nvme_keyring 00:18:44.391 21:34:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:44.391 21:34:07 -- nvmf/common.sh@124 
-- # set -e 00:18:44.391 21:34:07 -- nvmf/common.sh@125 -- # return 0 00:18:44.391 21:34:07 -- nvmf/common.sh@478 -- # '[' -n 2883178 ']' 00:18:44.391 21:34:07 -- nvmf/common.sh@479 -- # killprocess 2883178 00:18:44.391 21:34:07 -- common/autotest_common.sh@936 -- # '[' -z 2883178 ']' 00:18:44.391 21:34:07 -- common/autotest_common.sh@940 -- # kill -0 2883178 00:18:44.391 21:34:07 -- common/autotest_common.sh@941 -- # uname 00:18:44.391 21:34:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:44.391 21:34:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2883178 00:18:44.391 21:34:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:44.391 21:34:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:44.391 21:34:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2883178' 00:18:44.391 killing process with pid 2883178 00:18:44.391 21:34:07 -- common/autotest_common.sh@955 -- # kill 2883178 00:18:44.391 21:34:07 -- common/autotest_common.sh@960 -- # wait 2883178 00:18:44.650 21:34:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:44.650 21:34:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:44.650 21:34:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:44.650 21:34:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.650 21:34:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.650 21:34:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.651 21:34:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.651 21:34:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.190 21:34:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:47.190 21:34:09 -- target/tls.sh@18 -- # rm -f /tmp/tmp.a5Ur20dpC0 /tmp/tmp.3BXhS9CNRs /tmp/tmp.3qQaBjvwJ8 00:18:47.190 00:18:47.190 real 1m26.664s 00:18:47.190 user 2m8.800s 00:18:47.190 sys 0m33.382s 00:18:47.190 21:34:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:47.190 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:18:47.190 ************************************ 00:18:47.190 END TEST nvmf_tls 00:18:47.190 ************************************ 00:18:47.190 21:34:09 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:47.190 21:34:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:47.190 21:34:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:47.190 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:18:47.190 ************************************ 00:18:47.190 START TEST nvmf_fips 00:18:47.190 ************************************ 00:18:47.190 21:34:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:47.190 * Looking for test storage... 
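Every shutdown above funnels through the killprocess helper from autotest_common.sh. Judging only from the commands echoed in this trace, it checks that the pid is still alive, inspects the process name so it never signals a bare sudo wrapper, then kills and reaps the process. A simplified reconstruction (the real helper has more branches than this trace exercises):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                  # process must still exist
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0, reactor_1
            [ "$name" = sudo ] && return 1          # never signal the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap it and propagate the exit code
    }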
00:18:47.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:47.190 21:34:09 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.190 21:34:09 -- nvmf/common.sh@7 -- # uname -s 00:18:47.190 21:34:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.190 21:34:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.190 21:34:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.190 21:34:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.190 21:34:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.190 21:34:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.190 21:34:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.190 21:34:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.190 21:34:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.190 21:34:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.191 21:34:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:47.191 21:34:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:47.191 21:34:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.191 21:34:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.191 21:34:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.191 21:34:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.191 21:34:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.191 21:34:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.191 21:34:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.191 21:34:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.191 21:34:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.191 21:34:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.191 21:34:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.191 21:34:09 -- paths/export.sh@5 -- # export PATH 00:18:47.191 21:34:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.191 21:34:09 -- nvmf/common.sh@47 -- # : 0 00:18:47.191 21:34:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:47.191 21:34:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:47.191 21:34:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.191 21:34:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.191 21:34:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.191 21:34:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:47.191 21:34:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:47.191 21:34:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:47.191 21:34:09 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:47.191 21:34:09 -- fips/fips.sh@89 -- # check_openssl_version 00:18:47.191 21:34:09 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:47.191 21:34:09 -- fips/fips.sh@85 -- # openssl version 00:18:47.191 21:34:09 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:47.191 21:34:09 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:47.191 21:34:09 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:47.191 21:34:09 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:47.191 21:34:09 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:47.191 21:34:09 -- scripts/common.sh@333 -- # IFS=.-: 00:18:47.191 21:34:09 -- scripts/common.sh@333 -- # read -ra ver1 00:18:47.191 21:34:09 -- scripts/common.sh@334 -- # IFS=.-: 00:18:47.191 21:34:09 -- scripts/common.sh@334 -- # read -ra ver2 00:18:47.191 21:34:09 -- scripts/common.sh@335 -- # local 'op=>=' 00:18:47.191 21:34:09 -- scripts/common.sh@337 -- # ver1_l=3 00:18:47.191 21:34:09 -- scripts/common.sh@338 -- # ver2_l=3 00:18:47.191 21:34:09 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:47.191 21:34:09 -- scripts/common.sh@341 -- # case "$op" in 00:18:47.191 21:34:09 -- scripts/common.sh@345 -- # : 1 00:18:47.191 21:34:09 -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:47.191 21:34:09 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:47.191 21:34:09 -- scripts/common.sh@362 -- # decimal 3 00:18:47.191 21:34:09 -- scripts/common.sh@350 -- # local d=3 00:18:47.191 21:34:09 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:47.191 21:34:09 -- scripts/common.sh@352 -- # echo 3 00:18:47.191 21:34:09 -- scripts/common.sh@362 -- # ver1[v]=3 00:18:47.191 21:34:09 -- scripts/common.sh@363 -- # decimal 3 00:18:47.191 21:34:09 -- scripts/common.sh@350 -- # local d=3 00:18:47.191 21:34:09 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:47.191 21:34:09 -- scripts/common.sh@352 -- # echo 3 00:18:47.191 21:34:09 -- scripts/common.sh@363 -- # ver2[v]=3 00:18:47.191 21:34:09 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:47.191 21:34:09 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:47.191 21:34:09 -- scripts/common.sh@361 -- # (( v++ )) 00:18:47.191 21:34:09 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:47.191 21:34:09 -- scripts/common.sh@362 -- # decimal 0 00:18:47.191 21:34:09 -- scripts/common.sh@350 -- # local d=0 00:18:47.191 21:34:09 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:47.191 21:34:09 -- scripts/common.sh@352 -- # echo 0 00:18:47.191 21:34:09 -- scripts/common.sh@362 -- # ver1[v]=0 00:18:47.191 21:34:09 -- scripts/common.sh@363 -- # decimal 0 00:18:47.191 21:34:09 -- scripts/common.sh@350 -- # local d=0 00:18:47.191 21:34:09 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:47.191 21:34:09 -- scripts/common.sh@352 -- # echo 0 00:18:47.191 21:34:09 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:47.191 21:34:09 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:47.191 21:34:09 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:47.191 21:34:09 -- scripts/common.sh@361 -- # (( v++ )) 00:18:47.191 21:34:09 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:47.191 21:34:09 -- scripts/common.sh@362 -- # decimal 9 00:18:47.191 21:34:09 -- scripts/common.sh@350 -- # local d=9 00:18:47.191 21:34:09 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:47.191 21:34:09 -- scripts/common.sh@352 -- # echo 9 00:18:47.191 21:34:09 -- scripts/common.sh@362 -- # ver1[v]=9 00:18:47.191 21:34:09 -- scripts/common.sh@363 -- # decimal 0 00:18:47.191 21:34:09 -- scripts/common.sh@350 -- # local d=0 00:18:47.191 21:34:09 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:47.191 21:34:09 -- scripts/common.sh@352 -- # echo 0 00:18:47.191 21:34:09 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:47.191 21:34:09 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:47.191 21:34:09 -- scripts/common.sh@364 -- # return 0 00:18:47.191 21:34:09 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:47.191 21:34:09 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:47.191 21:34:09 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:47.191 21:34:09 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:47.191 21:34:09 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:47.191 21:34:09 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:47.191 21:34:09 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:47.191 21:34:09 -- fips/fips.sh@113 -- # build_openssl_config 00:18:47.191 21:34:09 -- fips/fips.sh@37 -- # cat 00:18:47.191 21:34:09 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:47.191 21:34:09 -- fips/fips.sh@58 -- # cat - 00:18:47.191 21:34:09 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:47.191 21:34:09 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:47.191 21:34:09 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:47.191 21:34:09 -- fips/fips.sh@116 -- # openssl list -providers 00:18:47.191 21:34:09 -- fips/fips.sh@116 -- # grep name 00:18:47.191 21:34:09 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:47.191 21:34:09 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:47.191 21:34:09 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:47.191 21:34:09 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:47.191 21:34:09 -- common/autotest_common.sh@638 -- # local es=0 00:18:47.191 21:34:09 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:47.191 21:34:09 -- fips/fips.sh@127 -- # : 00:18:47.191 21:34:09 -- common/autotest_common.sh@626 -- # local arg=openssl 00:18:47.191 21:34:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:47.191 21:34:09 -- common/autotest_common.sh@630 -- # type -t openssl 00:18:47.191 21:34:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:47.191 21:34:09 -- common/autotest_common.sh@632 -- # type -P openssl 00:18:47.191 21:34:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:47.191 21:34:09 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:18:47.191 21:34:09 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:18:47.191 21:34:09 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:18:47.191 Error setting digest 00:18:47.191 00B23ADF077F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:47.191 00B23ADF077F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:47.191 21:34:10 -- common/autotest_common.sh@641 -- # es=1 00:18:47.191 21:34:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:47.191 21:34:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:47.191 21:34:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:47.191 21:34:10 -- fips/fips.sh@130 -- # nvmftestinit 00:18:47.191 21:34:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:47.191 21:34:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.191 21:34:10 -- nvmf/common.sh@437 -- # prepare_net_devs 
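The FIPS gate in fips.sh works by negative testing: with OPENSSL_CONF pointed at the generated spdk_fips.conf, it confirms that exactly two providers (base and fips) are loaded and then demands that a non-approved digest be rejected, so the two "Error setting digest" lines above are the expected outcome. Roughly, with the test's NOT wrapper replaced here by a plain exit-status check:

    export OPENSSL_CONF=spdk_fips.conf

    # Expect the base and FIPS providers to be the only two loaded.
    openssl list -providers | grep name

    # MD5 is not FIPS-approved; with the FIPS provider active this must fail.
    if openssl md5 /dev/null >/dev/null 2>&1; then
        echo "MD5 digest succeeded - FIPS provider is not active" >&2
        exit 1
    fi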
00:18:47.191 21:34:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:47.191 21:34:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:47.191 21:34:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.191 21:34:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.191 21:34:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.191 21:34:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:47.191 21:34:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:47.192 21:34:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:47.192 21:34:10 -- common/autotest_common.sh@10 -- # set +x 00:18:53.762 21:34:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:53.762 21:34:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:53.762 21:34:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:53.762 21:34:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:53.762 21:34:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:53.762 21:34:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:53.762 21:34:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:53.762 21:34:16 -- nvmf/common.sh@295 -- # net_devs=() 00:18:53.762 21:34:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:53.762 21:34:16 -- nvmf/common.sh@296 -- # e810=() 00:18:53.762 21:34:16 -- nvmf/common.sh@296 -- # local -ga e810 00:18:53.762 21:34:16 -- nvmf/common.sh@297 -- # x722=() 00:18:53.762 21:34:16 -- nvmf/common.sh@297 -- # local -ga x722 00:18:53.762 21:34:16 -- nvmf/common.sh@298 -- # mlx=() 00:18:53.762 21:34:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:53.762 21:34:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.762 21:34:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.763 21:34:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.763 21:34:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.763 21:34:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.763 21:34:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.763 21:34:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.763 21:34:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.763 21:34:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.763 21:34:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.763 21:34:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.763 21:34:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:53.763 21:34:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:53.763 21:34:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:53.763 21:34:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.763 21:34:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:53.763 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:53.763 21:34:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:53.763 21:34:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:53.763 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:53.763 21:34:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:53.763 21:34:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.763 21:34:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.763 21:34:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:53.763 21:34:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.763 21:34:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:53.763 Found net devices under 0000:af:00.0: cvl_0_0 00:18:53.763 21:34:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.763 21:34:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:53.763 21:34:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.763 21:34:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:53.763 21:34:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.763 21:34:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:53.763 Found net devices under 0000:af:00.1: cvl_0_1 00:18:53.763 21:34:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.763 21:34:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:53.763 21:34:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:53.763 21:34:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:53.763 21:34:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:53.763 21:34:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.763 21:34:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.763 21:34:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.763 21:34:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:53.763 21:34:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.763 21:34:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.763 21:34:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:53.763 21:34:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.763 21:34:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.763 21:34:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:53.763 21:34:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:53.763 21:34:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.763 21:34:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.763 21:34:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.763 21:34:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0
00:18:53.763 21:34:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:53.763 21:34:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:53.763 21:34:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:53.763 21:34:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:53.763 21:34:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:18:53.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:53.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms
00:18:53.763
00:18:53.763 --- 10.0.0.2 ping statistics ---
00:18:53.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:53.763 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms
00:18:53.763 21:34:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:53.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:53.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms
00:18:53.763
00:18:53.763 --- 10.0.0.1 ping statistics ---
00:18:53.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:53.763 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms
00:18:53.763 21:34:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:53.763 21:34:16 -- nvmf/common.sh@411 -- # return 0
00:18:53.763 21:34:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:18:53.763 21:34:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:53.763 21:34:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:18:53.763 21:34:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:18:53.763 21:34:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:53.763 21:34:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:18:53.763 21:34:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:18:53.763 21:34:16 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2
00:18:53.763 21:34:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:18:53.763 21:34:16 -- common/autotest_common.sh@710 -- # xtrace_disable
00:18:53.763 21:34:16 -- common/autotest_common.sh@10 -- # set +x
00:18:53.763 21:34:16 -- nvmf/common.sh@470 -- # nvmfpid=2887563
00:18:53.763 21:34:16 -- nvmf/common.sh@471 -- # waitforlisten 2887563
00:18:53.763 21:34:16 -- common/autotest_common.sh@817 -- # '[' -z 2887563 ']'
00:18:53.763 21:34:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:53.763 21:34:16 -- common/autotest_common.sh@822 -- # local max_retries=100
00:18:53.763 21:34:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:53.764 21:34:16 -- common/autotest_common.sh@826 -- # xtrace_disable
00:18:53.764 21:34:16 -- common/autotest_common.sh@10 -- # set +x
00:18:53.764 21:34:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:53.764 [2024-04-24 21:34:16.489373] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
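Condensed from the nvmf_tcp_init xtrace above, the self-contained test topology boils down to the following shell sequence (a sketch of exactly the commands logged above; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 subnet are specific to this rig):

    # Move the target-side port into its own network namespace so the two
    # ports of one physical NIC can carry real TCP traffic to each other.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator

The target binary is then launched under ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix folded into NVMF_APP above), so it listens on 10.0.0.2 while the initiator stays in the root namespace.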
00:18:53.764 [2024-04-24 21:34:16.489425] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.764 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.764 [2024-04-24 21:34:16.561314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.764 [2024-04-24 21:34:16.633435] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.764 [2024-04-24 21:34:16.633476] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.764 [2024-04-24 21:34:16.633485] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.764 [2024-04-24 21:34:16.633494] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.764 [2024-04-24 21:34:16.633501] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.764 [2024-04-24 21:34:16.633527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.700 21:34:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:54.700 21:34:17 -- common/autotest_common.sh@850 -- # return 0 00:18:54.700 21:34:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:54.701 21:34:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:54.701 21:34:17 -- common/autotest_common.sh@10 -- # set +x 00:18:54.701 21:34:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.701 21:34:17 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:54.701 21:34:17 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:54.701 21:34:17 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:54.701 21:34:17 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:54.701 21:34:17 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:54.701 21:34:17 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:54.701 21:34:17 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:54.701 21:34:17 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:54.701 [2024-04-24 21:34:17.444138] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.701 [2024-04-24 21:34:17.460140] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:54.701 [2024-04-24 21:34:17.460313] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.701 [2024-04-24 21:34:17.488419] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:54.701 malloc0 00:18:54.701 21:34:17 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:54.701 21:34:17 -- fips/fips.sh@147 -- # bdevperf_pid=2887618 00:18:54.701 21:34:17 -- fips/fips.sh@148 -- # waitforlisten 2887618 /var/tmp/bdevperf.sock 00:18:54.701 21:34:17 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 
-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:54.701 21:34:17 -- common/autotest_common.sh@817 -- # '[' -z 2887618 ']' 00:18:54.701 21:34:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.701 21:34:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:54.701 21:34:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.701 21:34:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:54.701 21:34:17 -- common/autotest_common.sh@10 -- # set +x 00:18:54.701 [2024-04-24 21:34:17.570022] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:18:54.701 [2024-04-24 21:34:17.570075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887618 ] 00:18:54.959 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.959 [2024-04-24 21:34:17.636988] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.959 [2024-04-24 21:34:17.706075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.527 21:34:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:55.527 21:34:18 -- common/autotest_common.sh@850 -- # return 0 00:18:55.527 21:34:18 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:55.787 [2024-04-24 21:34:18.483788] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.787 [2024-04-24 21:34:18.483871] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:55.787 TLSTESTn1 00:18:55.787 21:34:18 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:55.787 Running I/O for 10 seconds... 
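The whole TLS handshake in this test hangs off a single pre-shared key in NVMe-oF interchange format. Reduced to its essentials (paths shortened here to rpc.py and key.txt; the target-side subsystem RPCs issued by setup_nvmf_tgt_conf are not echoed in full above, so only the initiator-side call is reproduced verbatim from the log):

    # PSK in interchange format, written to a file with 0600 permissions
    echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: > key.txt
    chmod 0600 key.txt
    # Initiator side: bdevperf attaches to the TLS listener using the same key
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key.txt

The two deprecation warnings logged above (nvmf_tcp_psk_path on the target, spdk_nvme_ctrlr_opts.psk on the initiator) both refer to this file-path style of PSK configuration, scheduled for removal in v24.09.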
00:19:08.045
00:19:08.045 Latency(us)
00:19:08.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:08.045 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:08.045 Verification LBA range: start 0x0 length 0x2000
00:19:08.045 TLSTESTn1 : 10.07 1589.49 6.21 0.00 0.00 80282.57 6973.03 122473.68
00:19:08.045 ===================================================================================================================
00:19:08.045 Total : 1589.49 6.21 0.00 0.00 80282.57 6973.03 122473.68
00:19:08.045 0
00:19:08.045 21:34:28 -- fips/fips.sh@1 -- # cleanup
00:19:08.045 21:34:28 -- fips/fips.sh@15 -- # process_shm --id 0
00:19:08.045 21:34:28 -- common/autotest_common.sh@794 -- # type=--id
00:19:08.045 21:34:28 -- common/autotest_common.sh@795 -- # id=0
00:19:08.045 21:34:28 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']'
00:19:08.045 21:34:28 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:19:08.045 21:34:28 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0
00:19:08.045 21:34:28 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]]
00:19:08.045 21:34:28 -- common/autotest_common.sh@806 -- # for n in $shm_files
00:19:08.045 21:34:28 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:19:08.045 nvmf_trace.0
00:19:08.045 21:34:28 -- common/autotest_common.sh@809 -- # return 0
00:19:08.045 21:34:28 -- fips/fips.sh@16 -- # killprocess 2887618
00:19:08.045 21:34:28 -- common/autotest_common.sh@936 -- # '[' -z 2887618 ']'
00:19:08.045 21:34:28 -- common/autotest_common.sh@940 -- # kill -0 2887618
00:19:08.045 21:34:28 -- common/autotest_common.sh@941 -- # uname
00:19:08.045 21:34:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:08.045 21:34:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2887618
00:19:08.045 21:34:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:19:08.045 21:34:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:19:08.045 21:34:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2887618'
00:19:08.045 killing process with pid 2887618
00:19:08.045 21:34:28 -- common/autotest_common.sh@955 -- # kill 2887618
00:19:08.045 Received shutdown signal, test time was about 10.000000 seconds
00:19:08.045
00:19:08.045 Latency(us)
00:19:08.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:08.045 ===================================================================================================================
00:19:08.045 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:08.045 [2024-04-24 21:34:28.916970] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:19:08.045 21:34:28 -- common/autotest_common.sh@960 -- # wait 2887618
00:19:08.045 21:34:29 -- fips/fips.sh@17 -- # nvmftestfini
00:19:08.045 21:34:29 -- nvmf/common.sh@477 -- # nvmfcleanup
00:19:08.045 21:34:29 -- nvmf/common.sh@117 -- # sync
00:19:08.045 21:34:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:08.045 21:34:29 -- nvmf/common.sh@120 -- # set +e
00:19:08.045 21:34:29 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:08.045 21:34:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:08.045 rmmod nvme_tcp
00:19:08.045 rmmod nvme_fabrics
00:19:08.045 rmmod nvme_keyring
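Before the processes are killed, process_shm preserves the tracing artifact: nvmf_tgt ran with -e 0xFFFF, so its tracepoint ring lives in /dev/shm and survives the shutdown. The capture step above amounts to the following (with $output_dir standing in for the .../spdk/../output path in the log):

    # Archive the SPDK trace ring of shm instance 0 for offline decoding
    find /dev/shm -name '*.0' -printf '%f\n'                   # -> nvmf_trace.0
    tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0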
00:19:08.045 21:34:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:08.045 21:34:29 -- nvmf/common.sh@124 -- # set -e 00:19:08.045 21:34:29 -- nvmf/common.sh@125 -- # return 0 00:19:08.045 21:34:29 -- nvmf/common.sh@478 -- # '[' -n 2887563 ']' 00:19:08.045 21:34:29 -- nvmf/common.sh@479 -- # killprocess 2887563 00:19:08.045 21:34:29 -- common/autotest_common.sh@936 -- # '[' -z 2887563 ']' 00:19:08.045 21:34:29 -- common/autotest_common.sh@940 -- # kill -0 2887563 00:19:08.045 21:34:29 -- common/autotest_common.sh@941 -- # uname 00:19:08.045 21:34:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:08.045 21:34:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2887563 00:19:08.045 21:34:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:08.045 21:34:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:08.045 21:34:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2887563' 00:19:08.045 killing process with pid 2887563 00:19:08.045 21:34:29 -- common/autotest_common.sh@955 -- # kill 2887563 00:19:08.045 [2024-04-24 21:34:29.249150] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:08.045 21:34:29 -- common/autotest_common.sh@960 -- # wait 2887563 00:19:08.045 21:34:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:08.045 21:34:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:08.045 21:34:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:08.045 21:34:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:08.045 21:34:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:08.046 21:34:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.046 21:34:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.046 21:34:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.984 21:34:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:08.984 21:34:31 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:08.984 00:19:08.984 real 0m21.843s 00:19:08.984 user 0m22.103s 00:19:08.984 sys 0m10.563s 00:19:08.984 21:34:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:08.984 21:34:31 -- common/autotest_common.sh@10 -- # set +x 00:19:08.984 ************************************ 00:19:08.984 END TEST nvmf_fips 00:19:08.984 ************************************ 00:19:08.984 21:34:31 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:19:08.984 21:34:31 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:19:08.984 21:34:31 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:19:08.984 21:34:31 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:19:08.984 21:34:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:08.984 21:34:31 -- common/autotest_common.sh@10 -- # set +x 00:19:15.556 21:34:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:15.556 21:34:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:15.556 21:34:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:15.556 21:34:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:15.556 21:34:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:15.556 21:34:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:15.556 21:34:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:15.556 21:34:37 -- nvmf/common.sh@295 -- # net_devs=() 00:19:15.556 21:34:37 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:19:15.556 21:34:37 -- nvmf/common.sh@296 -- # e810=() 00:19:15.556 21:34:37 -- nvmf/common.sh@296 -- # local -ga e810 00:19:15.557 21:34:37 -- nvmf/common.sh@297 -- # x722=() 00:19:15.557 21:34:37 -- nvmf/common.sh@297 -- # local -ga x722 00:19:15.557 21:34:37 -- nvmf/common.sh@298 -- # mlx=() 00:19:15.557 21:34:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:15.557 21:34:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:15.557 21:34:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:15.557 21:34:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:15.557 21:34:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:15.557 21:34:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:15.557 21:34:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:15.557 21:34:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:15.557 21:34:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:15.557 21:34:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:15.557 21:34:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:15.557 21:34:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:15.557 21:34:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:15.557 21:34:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:15.557 21:34:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:15.557 21:34:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:15.557 21:34:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:15.557 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:15.557 21:34:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:15.557 21:34:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:15.557 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:15.557 21:34:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:15.557 21:34:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:15.557 21:34:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.557 21:34:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:15.557 21:34:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.557 21:34:37 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:af:00.0: cvl_0_0' 00:19:15.557 Found net devices under 0000:af:00.0: cvl_0_0 00:19:15.557 21:34:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.557 21:34:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:15.557 21:34:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.557 21:34:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:15.557 21:34:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.557 21:34:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:15.557 Found net devices under 0000:af:00.1: cvl_0_1 00:19:15.557 21:34:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.557 21:34:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:15.557 21:34:37 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:15.557 21:34:37 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:19:15.557 21:34:37 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:15.557 21:34:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:15.557 21:34:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:15.557 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:19:15.557 ************************************ 00:19:15.557 START TEST nvmf_perf_adq 00:19:15.557 ************************************ 00:19:15.557 21:34:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:15.557 * Looking for test storage... 00:19:15.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:15.557 21:34:37 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:15.557 21:34:37 -- nvmf/common.sh@7 -- # uname -s 00:19:15.557 21:34:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.557 21:34:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.557 21:34:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.557 21:34:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.557 21:34:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.557 21:34:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.557 21:34:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.557 21:34:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.557 21:34:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.557 21:34:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.557 21:34:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:15.557 21:34:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:15.557 21:34:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.557 21:34:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.557 21:34:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:15.557 21:34:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.557 21:34:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:15.557 21:34:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.557 21:34:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.557 21:34:37 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.557 21:34:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.557 21:34:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.557 21:34:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.557 21:34:37 -- paths/export.sh@5 -- # export PATH 00:19:15.557 21:34:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.557 21:34:37 -- nvmf/common.sh@47 -- # : 0 00:19:15.557 21:34:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:15.557 21:34:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:15.557 21:34:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.557 21:34:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.557 21:34:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.557 21:34:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:15.557 21:34:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:15.557 21:34:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:15.557 21:34:37 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:15.557 21:34:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:15.557 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:19:22.131 21:34:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:22.131 21:34:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:22.131 21:34:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:22.131 21:34:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:22.131 
21:34:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:22.131 21:34:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:22.131 21:34:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:22.131 21:34:43 -- nvmf/common.sh@295 -- # net_devs=() 00:19:22.131 21:34:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:22.131 21:34:43 -- nvmf/common.sh@296 -- # e810=() 00:19:22.131 21:34:43 -- nvmf/common.sh@296 -- # local -ga e810 00:19:22.131 21:34:43 -- nvmf/common.sh@297 -- # x722=() 00:19:22.131 21:34:43 -- nvmf/common.sh@297 -- # local -ga x722 00:19:22.131 21:34:43 -- nvmf/common.sh@298 -- # mlx=() 00:19:22.131 21:34:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:22.131 21:34:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.131 21:34:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.131 21:34:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.131 21:34:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.131 21:34:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.131 21:34:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.131 21:34:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.131 21:34:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.131 21:34:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.131 21:34:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.131 21:34:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.131 21:34:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:22.131 21:34:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:22.131 21:34:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:22.131 21:34:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:22.131 21:34:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:22.131 21:34:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:22.131 21:34:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.131 21:34:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:22.131 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:22.131 21:34:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.131 21:34:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.131 21:34:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.131 21:34:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.131 21:34:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.131 21:34:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.131 21:34:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:22.131 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:22.131 21:34:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.131 21:34:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.131 21:34:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.131 21:34:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.131 21:34:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.131 21:34:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:22.131 21:34:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:22.131 21:34:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:22.132 21:34:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:19:22.132 21:34:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.132 21:34:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:22.132 21:34:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.132 21:34:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:22.132 Found net devices under 0000:af:00.0: cvl_0_0 00:19:22.132 21:34:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.132 21:34:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.132 21:34:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.132 21:34:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:22.132 21:34:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.132 21:34:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:22.132 Found net devices under 0000:af:00.1: cvl_0_1 00:19:22.132 21:34:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.132 21:34:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:22.132 21:34:43 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.132 21:34:43 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:22.132 21:34:43 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:22.132 21:34:43 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:19:22.132 21:34:43 -- target/perf_adq.sh@52 -- # rmmod ice 00:19:22.391 21:34:45 -- target/perf_adq.sh@53 -- # modprobe ice 00:19:24.936 21:34:47 -- target/perf_adq.sh@54 -- # sleep 5 00:19:30.227 21:34:52 -- target/perf_adq.sh@67 -- # nvmftestinit 00:19:30.227 21:34:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:30.227 21:34:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.227 21:34:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:30.227 21:34:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:30.227 21:34:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:30.227 21:34:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.227 21:34:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.227 21:34:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.227 21:34:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:30.227 21:34:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:30.227 21:34:52 -- common/autotest_common.sh@10 -- # set +x 00:19:30.227 21:34:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:30.227 21:34:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:30.227 21:34:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:30.227 21:34:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:30.227 21:34:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:30.227 21:34:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:30.227 21:34:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:30.227 21:34:52 -- nvmf/common.sh@295 -- # net_devs=() 00:19:30.227 21:34:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:30.227 21:34:52 -- nvmf/common.sh@296 -- # e810=() 00:19:30.227 21:34:52 -- nvmf/common.sh@296 -- # local -ga e810 00:19:30.227 21:34:52 -- nvmf/common.sh@297 -- # x722=() 00:19:30.227 21:34:52 -- nvmf/common.sh@297 -- # local -ga x722 00:19:30.227 21:34:52 -- nvmf/common.sh@298 -- # mlx=() 00:19:30.227 21:34:52 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:19:30.227 21:34:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.227 21:34:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.227 21:34:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.227 21:34:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.227 21:34:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.227 21:34:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.227 21:34:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.227 21:34:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.227 21:34:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.227 21:34:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.227 21:34:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.227 21:34:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:30.227 21:34:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:30.227 21:34:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:30.227 21:34:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.227 21:34:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:30.227 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:30.227 21:34:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.227 21:34:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:30.227 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:30.227 21:34:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:30.227 21:34:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.227 21:34:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.227 21:34:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:30.227 21:34:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.227 21:34:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:30.227 Found net devices under 0000:af:00.0: cvl_0_0 00:19:30.227 21:34:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.227 21:34:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.227 21:34:52 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.227 21:34:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:30.227 21:34:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.227 21:34:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:30.227 Found net devices under 0000:af:00.1: cvl_0_1 00:19:30.227 21:34:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.227 21:34:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:30.227 21:34:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:30.227 21:34:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:30.227 21:34:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.227 21:34:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.227 21:34:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.227 21:34:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:30.227 21:34:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.227 21:34:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.227 21:34:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:30.227 21:34:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.227 21:34:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.227 21:34:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:30.227 21:34:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:30.227 21:34:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.227 21:34:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.227 21:34:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.227 21:34:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.227 21:34:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:30.227 21:34:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.227 21:34:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.227 21:34:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.227 21:34:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:30.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:19:30.227 00:19:30.227 --- 10.0.0.2 ping statistics --- 00:19:30.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.227 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:19:30.227 21:34:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:30.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:19:30.227 00:19:30.227 --- 10.0.0.1 ping statistics --- 00:19:30.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.227 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:19:30.227 21:34:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.227 21:34:52 -- nvmf/common.sh@411 -- # return 0 00:19:30.227 21:34:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:30.227 21:34:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.227 21:34:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:30.227 21:34:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.227 21:34:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:30.227 21:34:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:30.227 21:34:52 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:30.227 21:34:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:30.227 21:34:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:30.227 21:34:52 -- common/autotest_common.sh@10 -- # set +x 00:19:30.227 21:34:52 -- nvmf/common.sh@470 -- # nvmfpid=2898057 00:19:30.227 21:34:52 -- nvmf/common.sh@471 -- # waitforlisten 2898057 00:19:30.227 21:34:52 -- common/autotest_common.sh@817 -- # '[' -z 2898057 ']' 00:19:30.227 21:34:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.227 21:34:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:30.227 21:34:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.227 21:34:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:30.227 21:34:52 -- common/autotest_common.sh@10 -- # set +x 00:19:30.227 21:34:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:30.227 [2024-04-24 21:34:52.686994] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:19:30.227 [2024-04-24 21:34:52.687040] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.227 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.227 [2024-04-24 21:34:52.761025] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.227 [2024-04-24 21:34:52.835678] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.227 [2024-04-24 21:34:52.835715] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.227 [2024-04-24 21:34:52.835725] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.227 [2024-04-24 21:34:52.835733] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.227 [2024-04-24 21:34:52.835740] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
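As the startup notices spell out, the trace ring of this instance can also be inspected live, per the log's own hint:

    spdk_trace -s nvmf -i 0    # snapshot the events of app "nvmf", shm instance 0
    # or copy /dev/shm/nvmf_trace.0 aside for offline analysis, as the FIPS
    # test's cleanup did earlier

Here -s names the application and -i the shared-memory instance id that nvmf_tgt was started with (-i 0).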
00:19:30.227 [2024-04-24 21:34:52.835834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.227 [2024-04-24 21:34:52.835946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.227 [2024-04-24 21:34:52.836032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.228 [2024-04-24 21:34:52.836034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.797 21:34:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:30.797 21:34:53 -- common/autotest_common.sh@850 -- # return 0 00:19:30.797 21:34:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:30.797 21:34:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:30.797 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:19:30.797 21:34:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.797 21:34:53 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:19:30.797 21:34:53 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:30.797 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.797 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:19:30.797 21:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.797 21:34:53 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:19:30.797 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.797 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:19:30.797 21:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.797 21:34:53 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:30.797 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.797 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:19:30.797 [2024-04-24 21:34:53.642697] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.797 21:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.797 21:34:53 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:30.797 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.797 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:19:30.797 Malloc1 00:19:30.797 21:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.797 21:34:53 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:30.797 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.797 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:19:31.056 21:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.056 21:34:53 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:31.056 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.056 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:19:31.056 21:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.056 21:34:53 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.056 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.056 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:19:31.057 [2024-04-24 21:34:53.697556] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.057 21:34:53 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:31.057 21:34:53 -- target/perf_adq.sh@73 -- # perfpid=2898121
00:19:31.057 21:34:53 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:19:31.057 21:34:53 -- target/perf_adq.sh@74 -- # sleep 2
00:19:31.057 EAL: No free 2048 kB hugepages reported on node 1
00:19:32.965 21:34:55 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats
00:19:32.965 21:34:55 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:19:32.965 21:34:55 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:32.965 21:34:55 -- common/autotest_common.sh@10 -- # set +x
00:19:32.965 21:34:55 -- target/perf_adq.sh@76 -- # wc -l
00:19:32.965 21:34:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:32.965 21:34:55 -- target/perf_adq.sh@76 -- # count=4
00:19:32.965 21:34:55 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]]
00:19:32.965 21:34:55 -- target/perf_adq.sh@81 -- # wait 2898121
00:19:41.091 Initializing NVMe Controllers
00:19:41.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:41.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:19:41.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:19:41.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:19:41.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:19:41.091 Initialization complete. Launching workers.
00:19:41.091 ========================================================
00:19:41.091 Latency(us)
00:19:41.091 Device Information : IOPS MiB/s Average min max
00:19:41.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10246.20 40.02 6264.86 1469.57 47238.39
00:19:41.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8893.20 34.74 7197.02 2291.85 48458.77
00:19:41.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10352.90 40.44 6183.05 1543.57 11091.15
00:19:41.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10288.40 40.19 6221.14 1568.64 11116.58
00:19:41.091 ========================================================
00:19:41.091 Total : 39780.70 155.39 6440.65 1469.57 48458.77
00:19:41.091
00:19:41.091 21:35:03 -- target/perf_adq.sh@82 -- # nvmftestfini
00:19:41.091 21:35:03 -- nvmf/common.sh@477 -- # nvmfcleanup
00:19:41.091 21:35:03 -- nvmf/common.sh@117 -- # sync
00:19:41.091 21:35:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:41.091 21:35:03 -- nvmf/common.sh@120 -- # set +e
00:19:41.091 21:35:03 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:41.091 21:35:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:41.091 rmmod nvme_tcp
00:19:41.091 rmmod nvme_fabrics
00:19:41.091 rmmod nvme_keyring
00:19:41.091 21:35:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:41.091 21:35:03 -- nvmf/common.sh@124 -- # set -e
00:19:41.091 21:35:03 -- nvmf/common.sh@125 -- # return 0
00:19:41.091 21:35:03 -- nvmf/common.sh@478 -- # '[' -n 2898057 ']'
00:19:41.091 21:35:03 -- nvmf/common.sh@479 -- # killprocess 2898057
00:19:41.091 21:35:03 -- common/autotest_common.sh@936 -- # '[' -z 2898057 ']'
00:19:41.091 21:35:03 -- common/autotest_common.sh@940
-- # kill -0 2898057 00:19:41.091 21:35:03 -- common/autotest_common.sh@941 -- # uname 00:19:41.351 21:35:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:41.351 21:35:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2898057 00:19:41.351 21:35:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:41.351 21:35:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:41.351 21:35:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2898057' 00:19:41.351 killing process with pid 2898057 00:19:41.351 21:35:04 -- common/autotest_common.sh@955 -- # kill 2898057 00:19:41.351 21:35:04 -- common/autotest_common.sh@960 -- # wait 2898057 00:19:41.611 21:35:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:41.612 21:35:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:41.612 21:35:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:41.612 21:35:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.612 21:35:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:41.612 21:35:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.612 21:35:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.612 21:35:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.519 21:35:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:43.519 21:35:06 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:19:43.519 21:35:06 -- target/perf_adq.sh@52 -- # rmmod ice 00:19:44.916 21:35:07 -- target/perf_adq.sh@53 -- # modprobe ice 00:19:47.456 21:35:09 -- target/perf_adq.sh@54 -- # sleep 5 00:19:52.761 21:35:14 -- target/perf_adq.sh@87 -- # nvmftestinit 00:19:52.761 21:35:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:52.761 21:35:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.761 21:35:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:52.761 21:35:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:52.761 21:35:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:52.761 21:35:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.761 21:35:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.761 21:35:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.761 21:35:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:52.761 21:35:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:52.761 21:35:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:52.761 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:19:52.761 21:35:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:52.761 21:35:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.761 21:35:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.761 21:35:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.761 21:35:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.761 21:35:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.761 21:35:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.761 21:35:14 -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.761 21:35:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.761 21:35:14 -- nvmf/common.sh@296 -- # e810=() 00:19:52.761 21:35:14 -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.761 21:35:14 -- nvmf/common.sh@297 -- # x722=() 00:19:52.761 21:35:14 -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.761 21:35:14 -- nvmf/common.sh@298 -- # mlx=() 00:19:52.761 
21:35:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.761 21:35:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.761 21:35:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.762 21:35:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.762 21:35:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.762 21:35:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.762 21:35:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.762 21:35:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.762 21:35:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.762 21:35:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.762 21:35:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.762 21:35:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.762 21:35:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.762 21:35:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:52.762 21:35:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.762 21:35:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.762 21:35:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:52.762 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:52.762 21:35:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.762 21:35:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:52.762 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:52.762 21:35:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.762 21:35:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.762 21:35:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.762 21:35:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:52.762 21:35:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.762 21:35:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:52.762 Found net devices under 0000:af:00.0: cvl_0_0 00:19:52.762 21:35:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.762 21:35:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.762 21:35:14 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.762 21:35:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:52.762 21:35:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.762 21:35:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:52.762 Found net devices under 0000:af:00.1: cvl_0_1 00:19:52.762 21:35:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.762 21:35:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:52.762 21:35:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:52.762 21:35:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:52.762 21:35:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:52.762 21:35:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.762 21:35:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.762 21:35:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.762 21:35:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:52.762 21:35:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.762 21:35:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.762 21:35:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:52.762 21:35:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.762 21:35:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.762 21:35:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:52.762 21:35:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:52.762 21:35:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.762 21:35:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.762 21:35:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.762 21:35:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.762 21:35:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:52.762 21:35:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.762 21:35:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.762 21:35:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.762 21:35:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:52.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:19:52.762 00:19:52.762 --- 10.0.0.2 ping statistics --- 00:19:52.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.762 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:19:52.762 21:35:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:52.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms
00:19:52.762
00:19:52.762 --- 10.0.0.1 ping statistics ---
00:19:52.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:52.762 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms
00:19:52.762 21:35:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:52.762 21:35:15 -- nvmf/common.sh@411 -- # return 0
00:19:52.762 21:35:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:19:52.762 21:35:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:52.762 21:35:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:19:52.762 21:35:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:19:52.762 21:35:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:52.762 21:35:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:19:52.762 21:35:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:19:52.762 21:35:15 -- target/perf_adq.sh@88 -- # adq_configure_driver
00:19:52.762 21:35:15 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:19:52.762 21:35:15 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:19:52.762 21:35:15 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:19:52.762 net.core.busy_poll = 1
00:19:52.762 21:35:15 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:19:52.762 net.core.busy_read = 1
00:19:52.762 21:35:15 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:19:52.762 21:35:15 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:19:52.762 21:35:15 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:19:52.762 21:35:15 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:19:52.762 21:35:15 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:19:52.762 21:35:15 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc
00:19:52.762 21:35:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:19:52.762 21:35:15 -- common/autotest_common.sh@710 -- # xtrace_disable
00:19:52.762 21:35:15 -- common/autotest_common.sh@10 -- # set +x
00:19:52.762 21:35:15 -- nvmf/common.sh@470 -- # nvmfpid=2902194
00:19:52.762 21:35:15 -- nvmf/common.sh@471 -- # waitforlisten 2902194
00:19:52.762 21:35:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:19:52.762 21:35:15 -- common/autotest_common.sh@817 -- # '[' -z 2902194 ']'
00:19:52.762 21:35:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:52.762 21:35:15 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:52.762 21:35:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
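The adq_configure_driver block above is the heart of this test: it enables hardware traffic-class offload on the target port, turns on socket busy polling, and pins NVMe/TCP traffic (TCP dst port 4420) to its own hardware traffic class. A minimal standalone sketch of the same sequence, run outside the test's network namespace and assuming an ADQ-capable ice interface named eth0 (the interface name is illustrative; the queue layout and flower filter mirror the values logged above):

    # Allow mqprio channel filters to be offloaded to the NIC (ice driver).
    ethtool --offload eth0 hw-tc-offload on
    ethtool --set-priv-flags eth0 channel-pkt-inspect-optimize off

    # Busy-poll sockets instead of sleeping on interrupts.
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1

    # Two traffic classes: TC0 owns queues 0-1, TC1 owns queues 2-3.
    tc qdisc add dev eth0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel

    # Steer NVMe/TCP flows (10.0.0.2:4420) into TC1 entirely in hardware.
    tc qdisc add dev eth0 ingress
    tc filter add dev eth0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

With skip_sw set, matching packets land directly on TC1's queues, which is what lets a subset of the target reactors stay idle in the poll-group check further down.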
00:19:52.762 21:35:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:52.762 21:35:15 -- common/autotest_common.sh@10 -- # set +x 00:19:52.762 [2024-04-24 21:35:15.483790] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:19:52.762 [2024-04-24 21:35:15.483838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.762 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.762 [2024-04-24 21:35:15.559937] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.762 [2024-04-24 21:35:15.631683] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.762 [2024-04-24 21:35:15.631725] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.762 [2024-04-24 21:35:15.631734] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.762 [2024-04-24 21:35:15.631742] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.762 [2024-04-24 21:35:15.631749] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.762 [2024-04-24 21:35:15.631842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.762 [2024-04-24 21:35:15.631952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.762 [2024-04-24 21:35:15.632035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.762 [2024-04-24 21:35:15.632037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.701 21:35:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:53.701 21:35:16 -- common/autotest_common.sh@850 -- # return 0 00:19:53.701 21:35:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:53.701 21:35:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:53.701 21:35:16 -- common/autotest_common.sh@10 -- # set +x 00:19:53.701 21:35:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.701 21:35:16 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:19:53.701 21:35:16 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:53.701 21:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.701 21:35:16 -- common/autotest_common.sh@10 -- # set +x 00:19:53.701 21:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.701 21:35:16 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:19:53.701 21:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.701 21:35:16 -- common/autotest_common.sh@10 -- # set +x 00:19:53.701 21:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.701 21:35:16 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:53.701 21:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.701 21:35:16 -- common/autotest_common.sh@10 -- # set +x 00:19:53.701 [2024-04-24 21:35:16.429244] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.701 21:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.701 21:35:16 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
00:19:53.701 21:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.701 21:35:16 -- common/autotest_common.sh@10 -- # set +x 00:19:53.701 Malloc1 00:19:53.701 21:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.701 21:35:16 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:53.701 21:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.701 21:35:16 -- common/autotest_common.sh@10 -- # set +x 00:19:53.701 21:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.701 21:35:16 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:53.701 21:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.701 21:35:16 -- common/autotest_common.sh@10 -- # set +x 00:19:53.701 21:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.701 21:35:16 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.701 21:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.701 21:35:16 -- common/autotest_common.sh@10 -- # set +x 00:19:53.701 [2024-04-24 21:35:16.479894] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.701 21:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.701 21:35:16 -- target/perf_adq.sh@94 -- # perfpid=2902472 00:19:53.701 21:35:16 -- target/perf_adq.sh@95 -- # sleep 2 00:19:53.701 21:35:16 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:53.701 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.236 21:35:18 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:19:56.236 21:35:18 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:56.236 21:35:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.236 21:35:18 -- common/autotest_common.sh@10 -- # set +x 00:19:56.236 21:35:18 -- target/perf_adq.sh@97 -- # wc -l 00:19:56.236 21:35:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.236 21:35:18 -- target/perf_adq.sh@97 -- # count=2 00:19:56.236 21:35:18 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:19:56.236 21:35:18 -- target/perf_adq.sh@103 -- # wait 2902472 00:20:04.367 Initializing NVMe Controllers 00:20:04.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:04.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:04.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:04.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:04.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:04.367 Initialization complete. Launching workers. 
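The nvmf_get_stats pipeline above is how the script validates ADQ steering: the target polls with four reactors (-m 0xF), but only the two queues of TC1 should ever carry NVMe/TCP connections, so at least two poll groups must report zero active I/O qpairs mid-run (the failing branch is count < 2, and [[ 2 -lt 2 ]] being false is the passing case). A hedged sketch of the same check done by hand; the rpc.py path and default RPC socket are assumptions based on a stock SPDK checkout:

    # Count poll groups that currently own no I/O qpairs. With ADQ working,
    # 2 of the 4 reactors should stay idle while spdk_nvme_perf is running.
    idle_groups=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    if [[ $idle_groups -lt 2 ]]; then
        echo "ADQ steering failed: connections spread across too many cores"
    fi

In the latency table that follows, the Total IOPS row is the sum of the four per-core rows (6702.40 + 6886.30 + 7418.80 + 7029.80 ≈ 28037.29), and the 9138.59 us average is the IOPS-weighted mean of the per-core averages rather than a plain arithmetic mean (which would be 9151.47 us).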
00:20:04.367 ========================================================
00:20:04.367 Latency(us)
00:20:04.367 Device Information : IOPS MiB/s Average min max
00:20:04.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6702.40 26.18 9567.01 1800.80 53626.53
00:20:04.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6886.30 26.90 9304.10 1774.70 53593.47
00:20:04.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7418.80 28.98 8629.21 1423.91 53457.54
00:20:04.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7029.80 27.46 9105.55 1590.68 53200.40
00:20:04.367 ========================================================
00:20:04.367 Total : 28037.29 109.52 9138.59 1423.91 53626.53
00:20:04.367
00:20:04.367 21:35:26 -- target/perf_adq.sh@104 -- # nvmftestfini
00:20:04.367 21:35:26 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:04.367 21:35:26 -- nvmf/common.sh@117 -- # sync
00:20:04.367 21:35:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:04.367 21:35:26 -- nvmf/common.sh@120 -- # set +e
00:20:04.367 21:35:26 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:04.367 21:35:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:04.367 rmmod nvme_tcp
00:20:04.367 rmmod nvme_fabrics
00:20:04.367 rmmod nvme_keyring
00:20:04.367 21:35:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:04.367 21:35:26 -- nvmf/common.sh@124 -- # set -e
00:20:04.367 21:35:26 -- nvmf/common.sh@125 -- # return 0
00:20:04.367 21:35:26 -- nvmf/common.sh@478 -- # '[' -n 2902194 ']'
00:20:04.367 21:35:26 -- nvmf/common.sh@479 -- # killprocess 2902194
00:20:04.367 21:35:26 -- common/autotest_common.sh@936 -- # '[' -z 2902194 ']'
00:20:04.367 21:35:26 -- common/autotest_common.sh@940 -- # kill -0 2902194
00:20:04.367 21:35:26 -- common/autotest_common.sh@941 -- # uname
00:20:04.367 21:35:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:04.367 21:35:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2902194
00:20:04.367 21:35:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:04.367 21:35:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:04.367 21:35:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2902194'
killing process with pid 2902194
21:35:26 -- common/autotest_common.sh@955 -- # kill 2902194
00:20:04.367 21:35:26 -- common/autotest_common.sh@960 -- # wait 2902194
00:20:04.367 21:35:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:04.367 21:35:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:04.367 21:35:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:04.367 21:35:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:04.367 21:35:26 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:04.367 21:35:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:04.367 21:35:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:04.367 21:35:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:07.670 21:35:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:07.670 21:35:30 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT
00:20:07.670
00:20:07.670 real 0m52.439s
00:20:07.670 user 2m45.838s
00:20:07.670 sys 0m13.884s
00:20:07.670 21:35:30 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:07.670 21:35:30 -- common/autotest_common.sh@10 -- # set +x
00:20:07.670
************************************ 00:20:07.670 END TEST nvmf_perf_adq 00:20:07.670 ************************************ 00:20:07.670 21:35:30 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:07.670 21:35:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:07.670 21:35:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:07.670 21:35:30 -- common/autotest_common.sh@10 -- # set +x 00:20:07.670 ************************************ 00:20:07.670 START TEST nvmf_shutdown 00:20:07.670 ************************************ 00:20:07.670 21:35:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:07.670 * Looking for test storage... 00:20:07.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.670 21:35:30 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.670 21:35:30 -- nvmf/common.sh@7 -- # uname -s 00:20:07.670 21:35:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.670 21:35:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.670 21:35:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.670 21:35:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.670 21:35:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.670 21:35:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.670 21:35:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.670 21:35:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.670 21:35:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.670 21:35:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.670 21:35:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:07.670 21:35:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:07.670 21:35:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.670 21:35:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.670 21:35:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.670 21:35:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.670 21:35:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.671 21:35:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.671 21:35:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.671 21:35:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.671 21:35:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.671 21:35:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.671 21:35:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.671 21:35:30 -- paths/export.sh@5 -- # export PATH 00:20:07.671 21:35:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.671 21:35:30 -- nvmf/common.sh@47 -- # : 0 00:20:07.671 21:35:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.671 21:35:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.671 21:35:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.671 21:35:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.671 21:35:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.671 21:35:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.671 21:35:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.671 21:35:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.671 21:35:30 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:07.671 21:35:30 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:07.671 21:35:30 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:07.671 21:35:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:07.671 21:35:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:07.671 21:35:30 -- common/autotest_common.sh@10 -- # set +x 00:20:07.929 ************************************ 00:20:07.929 START TEST nvmf_shutdown_tc1 00:20:07.929 ************************************ 00:20:07.929 21:35:30 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:20:07.929 21:35:30 -- target/shutdown.sh@74 -- # starttarget 00:20:07.929 21:35:30 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:07.929 21:35:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:07.929 21:35:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.929 21:35:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:07.929 21:35:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:07.929 21:35:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:07.929 
21:35:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.929 21:35:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.929 21:35:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.929 21:35:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:07.929 21:35:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:07.929 21:35:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:07.929 21:35:30 -- common/autotest_common.sh@10 -- # set +x 00:20:14.503 21:35:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:14.503 21:35:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:14.503 21:35:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:14.503 21:35:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:14.503 21:35:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:14.503 21:35:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:14.503 21:35:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:14.503 21:35:36 -- nvmf/common.sh@295 -- # net_devs=() 00:20:14.503 21:35:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:14.503 21:35:36 -- nvmf/common.sh@296 -- # e810=() 00:20:14.503 21:35:36 -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.503 21:35:36 -- nvmf/common.sh@297 -- # x722=() 00:20:14.503 21:35:36 -- nvmf/common.sh@297 -- # local -ga x722 00:20:14.503 21:35:36 -- nvmf/common.sh@298 -- # mlx=() 00:20:14.503 21:35:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.503 21:35:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.503 21:35:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.503 21:35:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.503 21:35:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.503 21:35:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.503 21:35:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.503 21:35:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.503 21:35:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.503 21:35:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.503 21:35:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.503 21:35:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.503 21:35:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:14.503 21:35:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:14.503 21:35:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:14.503 21:35:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:14.503 21:35:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:14.503 21:35:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.503 21:35:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.503 21:35:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:14.503 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:14.503 21:35:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.503 21:35:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.503 21:35:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.503 21:35:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.503 21:35:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.503 21:35:36 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:14.504 21:35:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:14.504 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:14.504 21:35:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.504 21:35:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.504 21:35:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.504 21:35:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.504 21:35:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.504 21:35:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.504 21:35:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:14.504 21:35:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:14.504 21:35:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.504 21:35:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.504 21:35:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:14.504 21:35:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.504 21:35:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:14.504 Found net devices under 0000:af:00.0: cvl_0_0 00:20:14.504 21:35:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.504 21:35:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.504 21:35:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.504 21:35:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:14.504 21:35:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.504 21:35:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:14.504 Found net devices under 0000:af:00.1: cvl_0_1 00:20:14.504 21:35:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.504 21:35:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:14.504 21:35:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:14.504 21:35:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:14.504 21:35:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:14.504 21:35:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:14.504 21:35:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.504 21:35:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.504 21:35:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.504 21:35:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:14.504 21:35:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.504 21:35:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.504 21:35:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:14.504 21:35:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.504 21:35:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.504 21:35:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:14.504 21:35:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:14.504 21:35:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.504 21:35:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.504 21:35:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.504 21:35:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.504 21:35:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:14.504 21:35:37 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.504 21:35:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.504 21:35:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.504 21:35:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:14.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:20:14.504 00:20:14.504 --- 10.0.0.2 ping statistics --- 00:20:14.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.504 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:20:14.504 21:35:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:14.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:20:14.504 00:20:14.504 --- 10.0.0.1 ping statistics --- 00:20:14.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.504 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:20:14.504 21:35:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.504 21:35:37 -- nvmf/common.sh@411 -- # return 0 00:20:14.504 21:35:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:14.504 21:35:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.504 21:35:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:14.504 21:35:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:14.504 21:35:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.504 21:35:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:14.504 21:35:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:14.504 21:35:37 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:14.504 21:35:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:14.504 21:35:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:14.504 21:35:37 -- common/autotest_common.sh@10 -- # set +x 00:20:14.504 21:35:37 -- nvmf/common.sh@470 -- # nvmfpid=2908146 00:20:14.504 21:35:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:14.504 21:35:37 -- nvmf/common.sh@471 -- # waitforlisten 2908146 00:20:14.504 21:35:37 -- common/autotest_common.sh@817 -- # '[' -z 2908146 ']' 00:20:14.504 21:35:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.504 21:35:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:14.504 21:35:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.504 21:35:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:14.504 21:35:37 -- common/autotest_common.sh@10 -- # set +x 00:20:14.504 [2024-04-24 21:35:37.315715] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:20:14.504 [2024-04-24 21:35:37.315760] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.504 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.504 [2024-04-24 21:35:37.389831] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.764 [2024-04-24 21:35:37.462471] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.764 [2024-04-24 21:35:37.462507] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.764 [2024-04-24 21:35:37.462517] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.764 [2024-04-24 21:35:37.462526] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.764 [2024-04-24 21:35:37.462537] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.764 [2024-04-24 21:35:37.462640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.764 [2024-04-24 21:35:37.462724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.764 [2024-04-24 21:35:37.462832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.764 [2024-04-24 21:35:37.462834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:15.333 21:35:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:15.333 21:35:38 -- common/autotest_common.sh@850 -- # return 0 00:20:15.333 21:35:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:15.333 21:35:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:15.333 21:35:38 -- common/autotest_common.sh@10 -- # set +x 00:20:15.333 21:35:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.333 21:35:38 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:15.333 21:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.333 21:35:38 -- common/autotest_common.sh@10 -- # set +x 00:20:15.333 [2024-04-24 21:35:38.184243] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.333 21:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.333 21:35:38 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:15.333 21:35:38 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:15.333 21:35:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:15.333 21:35:38 -- common/autotest_common.sh@10 -- # set +x 00:20:15.333 21:35:38 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:15.333 21:35:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:15.333 21:35:38 -- target/shutdown.sh@28 -- # cat 00:20:15.333 21:35:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:15.333 21:35:38 -- target/shutdown.sh@28 -- # cat 00:20:15.333 21:35:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:15.333 21:35:38 -- target/shutdown.sh@28 -- # cat 00:20:15.593 21:35:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:15.593 21:35:38 -- target/shutdown.sh@28 -- # cat 00:20:15.593 21:35:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:15.593 21:35:38 -- target/shutdown.sh@28 
-- # cat 00:20:15.593 21:35:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:15.593 21:35:38 -- target/shutdown.sh@28 -- # cat 00:20:15.593 21:35:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:15.593 21:35:38 -- target/shutdown.sh@28 -- # cat 00:20:15.593 21:35:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:15.593 21:35:38 -- target/shutdown.sh@28 -- # cat 00:20:15.593 21:35:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:15.593 21:35:38 -- target/shutdown.sh@28 -- # cat 00:20:15.593 21:35:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:15.593 21:35:38 -- target/shutdown.sh@28 -- # cat 00:20:15.593 21:35:38 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:15.593 21:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.593 21:35:38 -- common/autotest_common.sh@10 -- # set +x 00:20:15.593 Malloc1 00:20:15.593 [2024-04-24 21:35:38.299032] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.593 Malloc2 00:20:15.593 Malloc3 00:20:15.593 Malloc4 00:20:15.593 Malloc5 00:20:15.852 Malloc6 00:20:15.852 Malloc7 00:20:15.852 Malloc8 00:20:15.852 Malloc9 00:20:15.852 Malloc10 00:20:15.852 21:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.852 21:35:38 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:15.853 21:35:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:15.853 21:35:38 -- common/autotest_common.sh@10 -- # set +x 00:20:15.853 21:35:38 -- target/shutdown.sh@78 -- # perfpid=2908464 00:20:15.853 21:35:38 -- target/shutdown.sh@79 -- # waitforlisten 2908464 /var/tmp/bdevperf.sock 00:20:15.853 21:35:38 -- common/autotest_common.sh@817 -- # '[' -z 2908464 ']' 00:20:15.853 21:35:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.853 21:35:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:15.853 21:35:38 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:15.853 21:35:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
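The create_subsystems loop above batches ten RPC stanzas into rpcs.txt and replays them through a single rpc_cmd invocation, which is why Malloc1 through Malloc10 are reported back-to-back. A minimal unbatched equivalent, one rpc.py call at a time (the bdev size, block size, NQN layout, and listener address mirror values visible in this log; the serial-number format is illustrative):

    # Build ten malloc-backed subsystems, each visible on 10.0.0.2:4420.
    for i in $(seq 1 10); do
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done

Batching mainly saves target startup time here; once the target is configured, the per-call form behaves identically.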
00:20:15.853 21:35:38 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:15.853 21:35:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:15.853 21:35:38 -- common/autotest_common.sh@10 -- # set +x 00:20:15.853 21:35:38 -- nvmf/common.sh@521 -- # config=() 00:20:15.853 21:35:38 -- nvmf/common.sh@521 -- # local subsystem config 00:20:15.853 21:35:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:15.853 21:35:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:15.853 { 00:20:15.853 "params": { 00:20:15.853 "name": "Nvme$subsystem", 00:20:15.853 "trtype": "$TEST_TRANSPORT", 00:20:15.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.853 "adrfam": "ipv4", 00:20:15.853 "trsvcid": "$NVMF_PORT", 00:20:15.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.853 "hdgst": ${hdgst:-false}, 00:20:15.853 "ddgst": ${ddgst:-false} 00:20:15.853 }, 00:20:15.853 "method": "bdev_nvme_attach_controller" 00:20:15.853 } 00:20:15.853 EOF 00:20:15.853 )") 00:20:15.853 21:35:38 -- nvmf/common.sh@543 -- # cat 00:20:16.113 21:35:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:16.113 { 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme$subsystem", 00:20:16.113 "trtype": "$TEST_TRANSPORT", 00:20:16.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "$NVMF_PORT", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.113 "hdgst": ${hdgst:-false}, 00:20:16.113 "ddgst": ${ddgst:-false} 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 } 00:20:16.113 EOF 00:20:16.113 )") 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # cat 00:20:16.113 21:35:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:16.113 { 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme$subsystem", 00:20:16.113 "trtype": "$TEST_TRANSPORT", 00:20:16.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "$NVMF_PORT", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.113 "hdgst": ${hdgst:-false}, 00:20:16.113 "ddgst": ${ddgst:-false} 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 } 00:20:16.113 EOF 00:20:16.113 )") 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # cat 00:20:16.113 21:35:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:16.113 { 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme$subsystem", 00:20:16.113 "trtype": "$TEST_TRANSPORT", 00:20:16.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "$NVMF_PORT", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.113 "hdgst": ${hdgst:-false}, 00:20:16.113 "ddgst": ${ddgst:-false} 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 } 00:20:16.113 EOF 00:20:16.113 )") 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # cat 00:20:16.113 21:35:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:20:16.113 { 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme$subsystem", 00:20:16.113 "trtype": "$TEST_TRANSPORT", 00:20:16.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "$NVMF_PORT", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.113 "hdgst": ${hdgst:-false}, 00:20:16.113 "ddgst": ${ddgst:-false} 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 } 00:20:16.113 EOF 00:20:16.113 )") 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # cat 00:20:16.113 21:35:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:16.113 { 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme$subsystem", 00:20:16.113 "trtype": "$TEST_TRANSPORT", 00:20:16.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "$NVMF_PORT", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.113 "hdgst": ${hdgst:-false}, 00:20:16.113 "ddgst": ${ddgst:-false} 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 } 00:20:16.113 EOF 00:20:16.113 )") 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # cat 00:20:16.113 [2024-04-24 21:35:38.779347] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:20:16.113 [2024-04-24 21:35:38.779406] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:16.113 21:35:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:16.113 { 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme$subsystem", 00:20:16.113 "trtype": "$TEST_TRANSPORT", 00:20:16.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "$NVMF_PORT", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.113 "hdgst": ${hdgst:-false}, 00:20:16.113 "ddgst": ${ddgst:-false} 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 } 00:20:16.113 EOF 00:20:16.113 )") 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # cat 00:20:16.113 21:35:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:16.113 { 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme$subsystem", 00:20:16.113 "trtype": "$TEST_TRANSPORT", 00:20:16.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "$NVMF_PORT", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.113 "hdgst": ${hdgst:-false}, 00:20:16.113 "ddgst": ${ddgst:-false} 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 } 00:20:16.113 EOF 00:20:16.113 )") 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # cat 00:20:16.113 21:35:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:16.113 { 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme$subsystem", 00:20:16.113 "trtype": "$TEST_TRANSPORT", 00:20:16.113 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "$NVMF_PORT", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.113 "hdgst": ${hdgst:-false}, 00:20:16.113 "ddgst": ${ddgst:-false} 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 } 00:20:16.113 EOF 00:20:16.113 )") 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # cat 00:20:16.113 21:35:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:16.113 { 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme$subsystem", 00:20:16.113 "trtype": "$TEST_TRANSPORT", 00:20:16.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "$NVMF_PORT", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.113 "hdgst": ${hdgst:-false}, 00:20:16.113 "ddgst": ${ddgst:-false} 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 } 00:20:16.113 EOF 00:20:16.113 )") 00:20:16.113 21:35:38 -- nvmf/common.sh@543 -- # cat 00:20:16.113 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.113 21:35:38 -- nvmf/common.sh@545 -- # jq . 00:20:16.113 21:35:38 -- nvmf/common.sh@546 -- # IFS=, 00:20:16.113 21:35:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme1", 00:20:16.113 "trtype": "tcp", 00:20:16.113 "traddr": "10.0.0.2", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "4420", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.113 "hdgst": false, 00:20:16.113 "ddgst": false 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 },{ 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme2", 00:20:16.113 "trtype": "tcp", 00:20:16.113 "traddr": "10.0.0.2", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "4420", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:16.113 "hdgst": false, 00:20:16.113 "ddgst": false 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 },{ 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme3", 00:20:16.113 "trtype": "tcp", 00:20:16.113 "traddr": "10.0.0.2", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "4420", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:16.113 "hdgst": false, 00:20:16.113 "ddgst": false 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 },{ 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme4", 00:20:16.113 "trtype": "tcp", 00:20:16.113 "traddr": "10.0.0.2", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "4420", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:16.113 "hdgst": false, 00:20:16.113 "ddgst": false 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 },{ 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme5", 00:20:16.113 "trtype": "tcp", 00:20:16.113 "traddr": "10.0.0.2", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "4420", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:16.113 "hdgst": false, 00:20:16.113 "ddgst": false 00:20:16.113 }, 
00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 },{ 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme6", 00:20:16.113 "trtype": "tcp", 00:20:16.113 "traddr": "10.0.0.2", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "4420", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:16.113 "hdgst": false, 00:20:16.113 "ddgst": false 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 },{ 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme7", 00:20:16.113 "trtype": "tcp", 00:20:16.113 "traddr": "10.0.0.2", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "4420", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:16.113 "hdgst": false, 00:20:16.113 "ddgst": false 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 },{ 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme8", 00:20:16.113 "trtype": "tcp", 00:20:16.113 "traddr": "10.0.0.2", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "4420", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:16.113 "hdgst": false, 00:20:16.113 "ddgst": false 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 },{ 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme9", 00:20:16.113 "trtype": "tcp", 00:20:16.113 "traddr": "10.0.0.2", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "4420", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:16.113 "hdgst": false, 00:20:16.113 "ddgst": false 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 },{ 00:20:16.113 "params": { 00:20:16.113 "name": "Nvme10", 00:20:16.113 "trtype": "tcp", 00:20:16.113 "traddr": "10.0.0.2", 00:20:16.113 "adrfam": "ipv4", 00:20:16.113 "trsvcid": "4420", 00:20:16.113 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:16.113 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:16.113 "hdgst": false, 00:20:16.113 "ddgst": false 00:20:16.113 }, 00:20:16.113 "method": "bdev_nvme_attach_controller" 00:20:16.113 }' 00:20:16.113 [2024-04-24 21:35:38.851723] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.113 [2024-04-24 21:35:38.918934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.493 21:35:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:17.493 21:35:40 -- common/autotest_common.sh@850 -- # return 0 00:20:17.493 21:35:40 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:17.493 21:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.493 21:35:40 -- common/autotest_common.sh@10 -- # set +x 00:20:17.493 21:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.493 21:35:40 -- target/shutdown.sh@83 -- # kill -9 2908464 00:20:17.493 21:35:40 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:17.493 21:35:40 -- target/shutdown.sh@87 -- # sleep 1 00:20:18.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2908464 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:18.874 21:35:41 -- target/shutdown.sh@88 -- # kill -0 2908146 00:20:18.874 21:35:41 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
--json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:18.874 21:35:41 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:18.874 21:35:41 -- nvmf/common.sh@521 -- # config=() 00:20:18.874 21:35:41 -- nvmf/common.sh@521 -- # local subsystem config 00:20:18.874 21:35:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.874 { 00:20:18.874 "params": { 00:20:18.874 "name": "Nvme$subsystem", 00:20:18.874 "trtype": "$TEST_TRANSPORT", 00:20:18.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.874 "adrfam": "ipv4", 00:20:18.874 "trsvcid": "$NVMF_PORT", 00:20:18.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.874 "hdgst": ${hdgst:-false}, 00:20:18.874 "ddgst": ${ddgst:-false} 00:20:18.874 }, 00:20:18.874 "method": "bdev_nvme_attach_controller" 00:20:18.874 } 00:20:18.874 EOF 00:20:18.874 )") 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # cat 00:20:18.874 21:35:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.874 { 00:20:18.874 "params": { 00:20:18.874 "name": "Nvme$subsystem", 00:20:18.874 "trtype": "$TEST_TRANSPORT", 00:20:18.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.874 "adrfam": "ipv4", 00:20:18.874 "trsvcid": "$NVMF_PORT", 00:20:18.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.874 "hdgst": ${hdgst:-false}, 00:20:18.874 "ddgst": ${ddgst:-false} 00:20:18.874 }, 00:20:18.874 "method": "bdev_nvme_attach_controller" 00:20:18.874 } 00:20:18.874 EOF 00:20:18.874 )") 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # cat 00:20:18.874 21:35:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.874 { 00:20:18.874 "params": { 00:20:18.874 "name": "Nvme$subsystem", 00:20:18.874 "trtype": "$TEST_TRANSPORT", 00:20:18.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.874 "adrfam": "ipv4", 00:20:18.874 "trsvcid": "$NVMF_PORT", 00:20:18.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.874 "hdgst": ${hdgst:-false}, 00:20:18.874 "ddgst": ${ddgst:-false} 00:20:18.874 }, 00:20:18.874 "method": "bdev_nvme_attach_controller" 00:20:18.874 } 00:20:18.874 EOF 00:20:18.874 )") 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # cat 00:20:18.874 21:35:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.874 { 00:20:18.874 "params": { 00:20:18.874 "name": "Nvme$subsystem", 00:20:18.874 "trtype": "$TEST_TRANSPORT", 00:20:18.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.874 "adrfam": "ipv4", 00:20:18.874 "trsvcid": "$NVMF_PORT", 00:20:18.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.874 "hdgst": ${hdgst:-false}, 00:20:18.874 "ddgst": ${ddgst:-false} 00:20:18.874 }, 00:20:18.874 "method": "bdev_nvme_attach_controller" 00:20:18.874 } 00:20:18.874 EOF 00:20:18.874 )") 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # cat 00:20:18.874 21:35:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.874 { 00:20:18.874 "params": { 00:20:18.874 "name": "Nvme$subsystem", 00:20:18.874 "trtype": 
"$TEST_TRANSPORT", 00:20:18.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.874 "adrfam": "ipv4", 00:20:18.874 "trsvcid": "$NVMF_PORT", 00:20:18.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.874 "hdgst": ${hdgst:-false}, 00:20:18.874 "ddgst": ${ddgst:-false} 00:20:18.874 }, 00:20:18.874 "method": "bdev_nvme_attach_controller" 00:20:18.874 } 00:20:18.874 EOF 00:20:18.874 )") 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # cat 00:20:18.874 [2024-04-24 21:35:41.420204] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:20:18.874 [2024-04-24 21:35:41.420254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2908910 ] 00:20:18.874 21:35:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.874 { 00:20:18.874 "params": { 00:20:18.874 "name": "Nvme$subsystem", 00:20:18.874 "trtype": "$TEST_TRANSPORT", 00:20:18.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.874 "adrfam": "ipv4", 00:20:18.874 "trsvcid": "$NVMF_PORT", 00:20:18.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.874 "hdgst": ${hdgst:-false}, 00:20:18.874 "ddgst": ${ddgst:-false} 00:20:18.874 }, 00:20:18.874 "method": "bdev_nvme_attach_controller" 00:20:18.874 } 00:20:18.874 EOF 00:20:18.874 )") 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # cat 00:20:18.874 21:35:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.874 { 00:20:18.874 "params": { 00:20:18.874 "name": "Nvme$subsystem", 00:20:18.874 "trtype": "$TEST_TRANSPORT", 00:20:18.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.874 "adrfam": "ipv4", 00:20:18.874 "trsvcid": "$NVMF_PORT", 00:20:18.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.874 "hdgst": ${hdgst:-false}, 00:20:18.874 "ddgst": ${ddgst:-false} 00:20:18.874 }, 00:20:18.874 "method": "bdev_nvme_attach_controller" 00:20:18.874 } 00:20:18.874 EOF 00:20:18.874 )") 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # cat 00:20:18.874 21:35:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.874 { 00:20:18.874 "params": { 00:20:18.874 "name": "Nvme$subsystem", 00:20:18.874 "trtype": "$TEST_TRANSPORT", 00:20:18.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.874 "adrfam": "ipv4", 00:20:18.874 "trsvcid": "$NVMF_PORT", 00:20:18.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.874 "hdgst": ${hdgst:-false}, 00:20:18.874 "ddgst": ${ddgst:-false} 00:20:18.874 }, 00:20:18.874 "method": "bdev_nvme_attach_controller" 00:20:18.874 } 00:20:18.874 EOF 00:20:18.874 )") 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # cat 00:20:18.874 21:35:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.874 21:35:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.874 { 00:20:18.874 "params": { 00:20:18.874 "name": "Nvme$subsystem", 00:20:18.874 "trtype": "$TEST_TRANSPORT", 00:20:18.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.874 "adrfam": "ipv4", 00:20:18.874 "trsvcid": 
"$NVMF_PORT", 00:20:18.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.875 "hdgst": ${hdgst:-false}, 00:20:18.875 "ddgst": ${ddgst:-false} 00:20:18.875 }, 00:20:18.875 "method": "bdev_nvme_attach_controller" 00:20:18.875 } 00:20:18.875 EOF 00:20:18.875 )") 00:20:18.875 21:35:41 -- nvmf/common.sh@543 -- # cat 00:20:18.875 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.875 21:35:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:18.875 21:35:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:18.875 { 00:20:18.875 "params": { 00:20:18.875 "name": "Nvme$subsystem", 00:20:18.875 "trtype": "$TEST_TRANSPORT", 00:20:18.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.875 "adrfam": "ipv4", 00:20:18.875 "trsvcid": "$NVMF_PORT", 00:20:18.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.875 "hdgst": ${hdgst:-false}, 00:20:18.875 "ddgst": ${ddgst:-false} 00:20:18.875 }, 00:20:18.875 "method": "bdev_nvme_attach_controller" 00:20:18.875 } 00:20:18.875 EOF 00:20:18.875 )") 00:20:18.875 21:35:41 -- nvmf/common.sh@543 -- # cat 00:20:18.875 21:35:41 -- nvmf/common.sh@545 -- # jq . 00:20:18.875 21:35:41 -- nvmf/common.sh@546 -- # IFS=, 00:20:18.875 21:35:41 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:18.875 "params": { 00:20:18.875 "name": "Nvme1", 00:20:18.875 "trtype": "tcp", 00:20:18.875 "traddr": "10.0.0.2", 00:20:18.875 "adrfam": "ipv4", 00:20:18.875 "trsvcid": "4420", 00:20:18.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.875 "hdgst": false, 00:20:18.875 "ddgst": false 00:20:18.875 }, 00:20:18.875 "method": "bdev_nvme_attach_controller" 00:20:18.875 },{ 00:20:18.875 "params": { 00:20:18.875 "name": "Nvme2", 00:20:18.875 "trtype": "tcp", 00:20:18.875 "traddr": "10.0.0.2", 00:20:18.875 "adrfam": "ipv4", 00:20:18.875 "trsvcid": "4420", 00:20:18.875 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:18.875 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:18.875 "hdgst": false, 00:20:18.875 "ddgst": false 00:20:18.875 }, 00:20:18.875 "method": "bdev_nvme_attach_controller" 00:20:18.875 },{ 00:20:18.875 "params": { 00:20:18.875 "name": "Nvme3", 00:20:18.875 "trtype": "tcp", 00:20:18.875 "traddr": "10.0.0.2", 00:20:18.875 "adrfam": "ipv4", 00:20:18.875 "trsvcid": "4420", 00:20:18.875 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:18.875 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:18.875 "hdgst": false, 00:20:18.875 "ddgst": false 00:20:18.875 }, 00:20:18.875 "method": "bdev_nvme_attach_controller" 00:20:18.875 },{ 00:20:18.875 "params": { 00:20:18.875 "name": "Nvme4", 00:20:18.875 "trtype": "tcp", 00:20:18.875 "traddr": "10.0.0.2", 00:20:18.875 "adrfam": "ipv4", 00:20:18.875 "trsvcid": "4420", 00:20:18.875 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:18.875 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:18.875 "hdgst": false, 00:20:18.875 "ddgst": false 00:20:18.875 }, 00:20:18.875 "method": "bdev_nvme_attach_controller" 00:20:18.875 },{ 00:20:18.875 "params": { 00:20:18.875 "name": "Nvme5", 00:20:18.875 "trtype": "tcp", 00:20:18.875 "traddr": "10.0.0.2", 00:20:18.875 "adrfam": "ipv4", 00:20:18.875 "trsvcid": "4420", 00:20:18.875 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:18.875 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:18.875 "hdgst": false, 00:20:18.875 "ddgst": false 00:20:18.875 }, 00:20:18.875 "method": "bdev_nvme_attach_controller" 00:20:18.875 },{ 00:20:18.875 
"params": { 00:20:18.875 "name": "Nvme6", 00:20:18.875 "trtype": "tcp", 00:20:18.875 "traddr": "10.0.0.2", 00:20:18.875 "adrfam": "ipv4", 00:20:18.875 "trsvcid": "4420", 00:20:18.875 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:18.875 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:18.875 "hdgst": false, 00:20:18.875 "ddgst": false 00:20:18.875 }, 00:20:18.875 "method": "bdev_nvme_attach_controller" 00:20:18.875 },{ 00:20:18.875 "params": { 00:20:18.875 "name": "Nvme7", 00:20:18.875 "trtype": "tcp", 00:20:18.875 "traddr": "10.0.0.2", 00:20:18.875 "adrfam": "ipv4", 00:20:18.875 "trsvcid": "4420", 00:20:18.875 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:18.875 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:18.875 "hdgst": false, 00:20:18.875 "ddgst": false 00:20:18.875 }, 00:20:18.875 "method": "bdev_nvme_attach_controller" 00:20:18.875 },{ 00:20:18.875 "params": { 00:20:18.875 "name": "Nvme8", 00:20:18.875 "trtype": "tcp", 00:20:18.875 "traddr": "10.0.0.2", 00:20:18.875 "adrfam": "ipv4", 00:20:18.875 "trsvcid": "4420", 00:20:18.875 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:18.875 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:18.875 "hdgst": false, 00:20:18.875 "ddgst": false 00:20:18.875 }, 00:20:18.875 "method": "bdev_nvme_attach_controller" 00:20:18.875 },{ 00:20:18.875 "params": { 00:20:18.875 "name": "Nvme9", 00:20:18.875 "trtype": "tcp", 00:20:18.875 "traddr": "10.0.0.2", 00:20:18.875 "adrfam": "ipv4", 00:20:18.875 "trsvcid": "4420", 00:20:18.875 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:18.875 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:18.875 "hdgst": false, 00:20:18.875 "ddgst": false 00:20:18.875 }, 00:20:18.875 "method": "bdev_nvme_attach_controller" 00:20:18.875 },{ 00:20:18.875 "params": { 00:20:18.875 "name": "Nvme10", 00:20:18.875 "trtype": "tcp", 00:20:18.875 "traddr": "10.0.0.2", 00:20:18.875 "adrfam": "ipv4", 00:20:18.875 "trsvcid": "4420", 00:20:18.875 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:18.875 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:18.875 "hdgst": false, 00:20:18.875 "ddgst": false 00:20:18.875 }, 00:20:18.875 "method": "bdev_nvme_attach_controller" 00:20:18.875 }' 00:20:18.875 [2024-04-24 21:35:41.492888] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.875 [2024-04-24 21:35:41.561823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.259 Running I/O for 1 seconds... 
00:20:21.197
00:20:21.197 Latency(us)
00:20:21.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:21.197 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.197 Verification LBA range: start 0x0 length 0x400
00:20:21.197 Nvme1n1 : 1.11 172.57 10.79 0.00 0.00 367498.04 21600.67 323800.27
00:20:21.197 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.197 Verification LBA range: start 0x0 length 0x400
00:20:21.197 Nvme2n1 : 1.14 281.20 17.58 0.00 0.00 222708.70 25794.97 213070.64
00:20:21.197 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.197 Verification LBA range: start 0x0 length 0x400
00:20:21.197 Nvme3n1 : 1.12 229.18 14.32 0.00 0.00 269396.99 20132.66 258369.13
00:20:21.197 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.197 Verification LBA range: start 0x0 length 0x400
00:20:21.197 Nvme4n1 : 1.13 339.09 21.19 0.00 0.00 178674.86 6658.46 194615.71
00:20:21.197 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.197 Verification LBA range: start 0x0 length 0x400
00:20:21.197 Nvme5n1 : 1.15 224.98 14.06 0.00 0.00 267185.34 2608.33 296956.72
00:20:21.197 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.197 Verification LBA range: start 0x0 length 0x400
00:20:21.197 Nvme6n1 : 1.14 281.90 17.62 0.00 0.00 210280.45 20027.80 214748.36
00:20:21.197 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.197 Verification LBA range: start 0x0 length 0x400
00:20:21.197 Nvme7n1 : 1.11 288.95 18.06 0.00 0.00 201606.92 20237.52 199648.87
00:20:21.197 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.197 Verification LBA range: start 0x0 length 0x400
00:20:21.197 Nvme8n1 : 1.16 332.07 20.75 0.00 0.00 173740.99 14155.78 196293.43
00:20:21.197 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.197 Verification LBA range: start 0x0 length 0x400
00:20:21.197 Nvme9n1 : 1.13 284.29 17.77 0.00 0.00 199406.22 20132.66 198810.01
00:20:21.197 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:21.197 Verification LBA range: start 0x0 length 0x400
00:20:21.197 Nvme10n1 : 1.15 278.58 17.41 0.00 0.00 201100.66 20866.66 263402.29
00:20:21.197 ===================================================================================================================
00:20:21.197 Total : 2712.81 169.55 0.00 0.00 219605.49 2608.33 323800.27
00:20:21.456 21:35:44 -- target/shutdown.sh@94 -- # stoptarget
00:20:21.456 21:35:44 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:21.456 21:35:44 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:21.456 21:35:44 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:21.456 21:35:44 -- target/shutdown.sh@45 -- # nvmftestfini
00:20:21.456 21:35:44 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:21.456 21:35:44 -- nvmf/common.sh@117 -- # sync
00:20:21.456 21:35:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:21.456 21:35:44 -- nvmf/common.sh@120 -- # set +e
00:20:21.456 21:35:44 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:21.456 21:35:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:21.456 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
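
[Editor's note] stoptarget/nvmftestfini above tear the run down in a fixed order: job state and config files first, then kernel modules, then the target process. A minimal sketch of the tolerant module-unload step, assuming the same module names; the retry bound mirrors the traced for i in {1..20} loop (this run succeeds on the first pass), and the back-off sleep is an assumption:

    nvmfcleanup_sketch() {
        sync
        set +e                      # unload may fail while references remain
        local i
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break
            sleep 1                 # assumed brief back-off between retries
        done
        modprobe -v -r nvme-fabrics
        set -e
    }
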
00:20:21.716 21:35:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.716 21:35:44 -- nvmf/common.sh@124 -- # set -e 00:20:21.716 21:35:44 -- nvmf/common.sh@125 -- # return 0 00:20:21.716 21:35:44 -- nvmf/common.sh@478 -- # '[' -n 2908146 ']' 00:20:21.716 21:35:44 -- nvmf/common.sh@479 -- # killprocess 2908146 00:20:21.716 21:35:44 -- common/autotest_common.sh@936 -- # '[' -z 2908146 ']' 00:20:21.716 21:35:44 -- common/autotest_common.sh@940 -- # kill -0 2908146 00:20:21.716 21:35:44 -- common/autotest_common.sh@941 -- # uname 00:20:21.716 21:35:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:21.716 21:35:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2908146 00:20:21.716 21:35:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:21.716 21:35:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:21.716 21:35:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2908146' 00:20:21.716 killing process with pid 2908146 00:20:21.716 21:35:44 -- common/autotest_common.sh@955 -- # kill 2908146 00:20:21.716 21:35:44 -- common/autotest_common.sh@960 -- # wait 2908146 00:20:21.976 21:35:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:21.976 21:35:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:21.976 21:35:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:21.976 21:35:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.976 21:35:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:21.976 21:35:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.976 21:35:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.976 21:35:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.515 21:35:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:24.515 00:20:24.515 real 0m16.348s 00:20:24.515 user 0m34.825s 00:20:24.515 sys 0m6.858s 00:20:24.515 21:35:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:24.515 21:35:46 -- common/autotest_common.sh@10 -- # set +x 00:20:24.515 ************************************ 00:20:24.515 END TEST nvmf_shutdown_tc1 00:20:24.515 ************************************ 00:20:24.515 21:35:46 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:24.515 21:35:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:24.515 21:35:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:24.515 21:35:46 -- common/autotest_common.sh@10 -- # set +x 00:20:24.515 ************************************ 00:20:24.515 START TEST nvmf_shutdown_tc2 00:20:24.515 ************************************ 00:20:24.515 21:35:47 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:20:24.515 21:35:47 -- target/shutdown.sh@99 -- # starttarget 00:20:24.515 21:35:47 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:24.515 21:35:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:24.515 21:35:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.515 21:35:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:24.515 21:35:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:24.515 21:35:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:24.515 21:35:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.515 21:35:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.515 21:35:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.515 21:35:47 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:24.515 21:35:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:24.515 21:35:47 -- common/autotest_common.sh@10 -- # set +x 00:20:24.515 21:35:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:24.515 21:35:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:24.515 21:35:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:24.515 21:35:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:24.515 21:35:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:24.515 21:35:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:24.515 21:35:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:24.515 21:35:47 -- nvmf/common.sh@295 -- # net_devs=() 00:20:24.515 21:35:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:24.515 21:35:47 -- nvmf/common.sh@296 -- # e810=() 00:20:24.515 21:35:47 -- nvmf/common.sh@296 -- # local -ga e810 00:20:24.515 21:35:47 -- nvmf/common.sh@297 -- # x722=() 00:20:24.515 21:35:47 -- nvmf/common.sh@297 -- # local -ga x722 00:20:24.515 21:35:47 -- nvmf/common.sh@298 -- # mlx=() 00:20:24.515 21:35:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:24.515 21:35:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.515 21:35:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.515 21:35:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.515 21:35:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.515 21:35:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.515 21:35:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.515 21:35:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.515 21:35:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.515 21:35:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.515 21:35:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.515 21:35:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.515 21:35:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:24.515 21:35:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:24.515 21:35:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:24.515 21:35:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.515 21:35:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:24.515 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:24.515 21:35:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.515 21:35:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:24.515 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:24.515 21:35:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.515 21:35:47 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:24.515 21:35:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.515 21:35:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.515 21:35:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:24.515 21:35:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.515 21:35:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:24.515 Found net devices under 0000:af:00.0: cvl_0_0 00:20:24.515 21:35:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.515 21:35:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.515 21:35:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.515 21:35:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:24.515 21:35:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.515 21:35:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:24.515 Found net devices under 0000:af:00.1: cvl_0_1 00:20:24.515 21:35:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.515 21:35:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:24.515 21:35:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:24.515 21:35:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:24.515 21:35:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:24.515 21:35:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.515 21:35:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.515 21:35:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.515 21:35:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:24.515 21:35:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.515 21:35:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.515 21:35:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:24.515 21:35:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.515 21:35:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.515 21:35:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:24.515 21:35:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:24.515 21:35:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.515 21:35:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.515 21:35:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.515 21:35:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.515 21:35:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:24.515 21:35:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.515 21:35:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.515 21:35:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
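
[Editor's note] Everything the nvmf_tcp_init trace above does, collected in order: the first port moves into a private namespace as the target side (10.0.0.2), the second port stays in the default namespace as the initiator (10.0.0.1), TCP/4420 is opened, and the pings that follow verify the path. Interface names are the ones the device scan printed:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # target port leaves the default netns
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                          # initiator -> target check

Using a namespace lets one host act as both NVMe-oF target and initiator over real NICs without the kernel short-circuiting the traffic.
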
00:20:24.515 21:35:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:24.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:24.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms
00:20:24.775
00:20:24.775 --- 10.0.0.2 ping statistics ---
00:20:24.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:24.775 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms
00:20:24.775 21:35:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:24.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:24.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms
00:20:24.775
00:20:24.775 --- 10.0.0.1 ping statistics ---
00:20:24.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:24.775 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms
00:20:24.775 21:35:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:24.775 21:35:47 -- nvmf/common.sh@411 -- # return 0
00:20:24.775 21:35:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:20:24.775 21:35:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:24.775 21:35:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:20:24.775 21:35:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:20:24.775 21:35:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:24.776 21:35:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:20:24.776 21:35:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:20:24.776 21:35:47 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:20:24.776 21:35:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:20:24.776 21:35:47 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:24.776 21:35:47 -- common/autotest_common.sh@10 -- # set +x
00:20:24.776 21:35:47 -- nvmf/common.sh@470 -- # nvmfpid=2909964
00:20:24.776 21:35:47 -- nvmf/common.sh@471 -- # waitforlisten 2909964
00:20:24.776 21:35:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:20:24.776 21:35:47 -- common/autotest_common.sh@817 -- # '[' -z 2909964 ']'
00:20:24.776 21:35:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:24.776 21:35:47 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:24.776 21:35:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:24.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:24.776 21:35:47 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:24.776 21:35:47 -- common/autotest_common.sh@10 -- # set +x
00:20:24.776 [2024-04-24 21:35:47.516658] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:20:24.776 [2024-04-24 21:35:47.516705] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:24.776 EAL: No free 2048 kB hugepages reported on node 1
00:20:24.776 [2024-04-24 21:35:47.590230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:25.035 [2024-04-24 21:35:47.663948] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:25.035 [2024-04-24 21:35:47.663984] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.035 [2024-04-24 21:35:47.663994] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.035 [2024-04-24 21:35:47.664002] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.035 [2024-04-24 21:35:47.664009] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.035 [2024-04-24 21:35:47.664112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.035 [2024-04-24 21:35:47.664202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.035 [2024-04-24 21:35:47.664320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.035 [2024-04-24 21:35:47.664321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:25.604 21:35:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:25.604 21:35:48 -- common/autotest_common.sh@850 -- # return 0 00:20:25.604 21:35:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:25.604 21:35:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:25.604 21:35:48 -- common/autotest_common.sh@10 -- # set +x 00:20:25.604 21:35:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.604 21:35:48 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:25.604 21:35:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.604 21:35:48 -- common/autotest_common.sh@10 -- # set +x 00:20:25.604 [2024-04-24 21:35:48.368188] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.604 21:35:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.604 21:35:48 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:25.604 21:35:48 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:25.604 21:35:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:25.604 21:35:48 -- common/autotest_common.sh@10 -- # set +x 00:20:25.604 21:35:48 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:25.604 21:35:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:25.604 21:35:48 -- target/shutdown.sh@28 -- # cat 00:20:25.604 21:35:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:25.604 21:35:48 -- target/shutdown.sh@28 -- # cat 00:20:25.604 21:35:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:25.604 21:35:48 -- target/shutdown.sh@28 -- # cat 00:20:25.604 21:35:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:25.604 21:35:48 -- target/shutdown.sh@28 -- # cat 00:20:25.604 21:35:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:25.604 21:35:48 -- target/shutdown.sh@28 -- # cat 00:20:25.604 21:35:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:25.604 21:35:48 -- target/shutdown.sh@28 -- # cat 00:20:25.604 21:35:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:25.604 21:35:48 -- target/shutdown.sh@28 -- # cat 00:20:25.604 21:35:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:25.604 21:35:48 -- target/shutdown.sh@28 -- # cat 00:20:25.604 21:35:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:25.604 21:35:48 -- 
target/shutdown.sh@28 -- # cat 00:20:25.604 21:35:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:25.604 21:35:48 -- target/shutdown.sh@28 -- # cat 00:20:25.604 21:35:48 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:25.604 21:35:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.604 21:35:48 -- common/autotest_common.sh@10 -- # set +x 00:20:25.604 Malloc1 00:20:25.604 [2024-04-24 21:35:48.479145] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.864 Malloc2 00:20:25.864 Malloc3 00:20:25.864 Malloc4 00:20:25.864 Malloc5 00:20:25.864 Malloc6 00:20:25.864 Malloc7 00:20:26.124 Malloc8 00:20:26.124 Malloc9 00:20:26.124 Malloc10 00:20:26.124 21:35:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.124 21:35:48 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:26.124 21:35:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:26.124 21:35:48 -- common/autotest_common.sh@10 -- # set +x 00:20:26.124 21:35:48 -- target/shutdown.sh@103 -- # perfpid=2910269 00:20:26.124 21:35:48 -- target/shutdown.sh@104 -- # waitforlisten 2910269 /var/tmp/bdevperf.sock 00:20:26.124 21:35:48 -- common/autotest_common.sh@817 -- # '[' -z 2910269 ']' 00:20:26.124 21:35:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.124 21:35:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:26.124 21:35:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.124 21:35:48 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:26.124 21:35:48 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:26.124 21:35:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:26.124 21:35:48 -- common/autotest_common.sh@10 -- # set +x 00:20:26.124 21:35:48 -- nvmf/common.sh@521 -- # config=() 00:20:26.124 21:35:48 -- nvmf/common.sh@521 -- # local subsystem config 00:20:26.124 21:35:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.124 { 00:20:26.124 "params": { 00:20:26.124 "name": "Nvme$subsystem", 00:20:26.124 "trtype": "$TEST_TRANSPORT", 00:20:26.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.124 "adrfam": "ipv4", 00:20:26.124 "trsvcid": "$NVMF_PORT", 00:20:26.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.124 "hdgst": ${hdgst:-false}, 00:20:26.124 "ddgst": ${ddgst:-false} 00:20:26.124 }, 00:20:26.124 "method": "bdev_nvme_attach_controller" 00:20:26.124 } 00:20:26.124 EOF 00:20:26.124 )") 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # cat 00:20:26.124 21:35:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.124 { 00:20:26.124 "params": { 00:20:26.124 "name": "Nvme$subsystem", 00:20:26.124 "trtype": "$TEST_TRANSPORT", 00:20:26.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.124 "adrfam": "ipv4", 00:20:26.124 "trsvcid": "$NVMF_PORT", 00:20:26.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.124 
"hdgst": ${hdgst:-false}, 00:20:26.124 "ddgst": ${ddgst:-false} 00:20:26.124 }, 00:20:26.124 "method": "bdev_nvme_attach_controller" 00:20:26.124 } 00:20:26.124 EOF 00:20:26.124 )") 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # cat 00:20:26.124 21:35:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.124 { 00:20:26.124 "params": { 00:20:26.124 "name": "Nvme$subsystem", 00:20:26.124 "trtype": "$TEST_TRANSPORT", 00:20:26.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.124 "adrfam": "ipv4", 00:20:26.124 "trsvcid": "$NVMF_PORT", 00:20:26.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.124 "hdgst": ${hdgst:-false}, 00:20:26.124 "ddgst": ${ddgst:-false} 00:20:26.124 }, 00:20:26.124 "method": "bdev_nvme_attach_controller" 00:20:26.124 } 00:20:26.124 EOF 00:20:26.124 )") 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # cat 00:20:26.124 21:35:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.124 { 00:20:26.124 "params": { 00:20:26.124 "name": "Nvme$subsystem", 00:20:26.124 "trtype": "$TEST_TRANSPORT", 00:20:26.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.124 "adrfam": "ipv4", 00:20:26.124 "trsvcid": "$NVMF_PORT", 00:20:26.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.124 "hdgst": ${hdgst:-false}, 00:20:26.124 "ddgst": ${ddgst:-false} 00:20:26.124 }, 00:20:26.124 "method": "bdev_nvme_attach_controller" 00:20:26.124 } 00:20:26.124 EOF 00:20:26.124 )") 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # cat 00:20:26.124 21:35:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.124 { 00:20:26.124 "params": { 00:20:26.124 "name": "Nvme$subsystem", 00:20:26.124 "trtype": "$TEST_TRANSPORT", 00:20:26.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.124 "adrfam": "ipv4", 00:20:26.124 "trsvcid": "$NVMF_PORT", 00:20:26.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.124 "hdgst": ${hdgst:-false}, 00:20:26.124 "ddgst": ${ddgst:-false} 00:20:26.124 }, 00:20:26.124 "method": "bdev_nvme_attach_controller" 00:20:26.124 } 00:20:26.124 EOF 00:20:26.124 )") 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # cat 00:20:26.124 21:35:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.124 { 00:20:26.124 "params": { 00:20:26.124 "name": "Nvme$subsystem", 00:20:26.124 "trtype": "$TEST_TRANSPORT", 00:20:26.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.124 "adrfam": "ipv4", 00:20:26.124 "trsvcid": "$NVMF_PORT", 00:20:26.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.124 "hdgst": ${hdgst:-false}, 00:20:26.124 "ddgst": ${ddgst:-false} 00:20:26.124 }, 00:20:26.124 "method": "bdev_nvme_attach_controller" 00:20:26.124 } 00:20:26.124 EOF 00:20:26.124 )") 00:20:26.124 [2024-04-24 21:35:48.956529] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:20:26.124 [2024-04-24 21:35:48.956582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910269 ] 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # cat 00:20:26.124 21:35:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.124 { 00:20:26.124 "params": { 00:20:26.124 "name": "Nvme$subsystem", 00:20:26.124 "trtype": "$TEST_TRANSPORT", 00:20:26.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.124 "adrfam": "ipv4", 00:20:26.124 "trsvcid": "$NVMF_PORT", 00:20:26.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.124 "hdgst": ${hdgst:-false}, 00:20:26.124 "ddgst": ${ddgst:-false} 00:20:26.124 }, 00:20:26.124 "method": "bdev_nvme_attach_controller" 00:20:26.124 } 00:20:26.124 EOF 00:20:26.124 )") 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # cat 00:20:26.124 21:35:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.124 { 00:20:26.124 "params": { 00:20:26.124 "name": "Nvme$subsystem", 00:20:26.124 "trtype": "$TEST_TRANSPORT", 00:20:26.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.124 "adrfam": "ipv4", 00:20:26.124 "trsvcid": "$NVMF_PORT", 00:20:26.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.124 "hdgst": ${hdgst:-false}, 00:20:26.124 "ddgst": ${ddgst:-false} 00:20:26.124 }, 00:20:26.124 "method": "bdev_nvme_attach_controller" 00:20:26.124 } 00:20:26.124 EOF 00:20:26.124 )") 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # cat 00:20:26.124 21:35:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.124 { 00:20:26.124 "params": { 00:20:26.124 "name": "Nvme$subsystem", 00:20:26.124 "trtype": "$TEST_TRANSPORT", 00:20:26.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.124 "adrfam": "ipv4", 00:20:26.124 "trsvcid": "$NVMF_PORT", 00:20:26.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.124 "hdgst": ${hdgst:-false}, 00:20:26.124 "ddgst": ${ddgst:-false} 00:20:26.124 }, 00:20:26.124 "method": "bdev_nvme_attach_controller" 00:20:26.124 } 00:20:26.124 EOF 00:20:26.124 )") 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # cat 00:20:26.124 21:35:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.124 21:35:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.124 { 00:20:26.124 "params": { 00:20:26.124 "name": "Nvme$subsystem", 00:20:26.124 "trtype": "$TEST_TRANSPORT", 00:20:26.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.124 "adrfam": "ipv4", 00:20:26.124 "trsvcid": "$NVMF_PORT", 00:20:26.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.124 "hdgst": ${hdgst:-false}, 00:20:26.124 "ddgst": ${ddgst:-false} 00:20:26.124 }, 00:20:26.124 "method": "bdev_nvme_attach_controller" 00:20:26.124 } 00:20:26.124 EOF 00:20:26.125 )") 00:20:26.125 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.125 21:35:48 -- nvmf/common.sh@543 -- # cat 00:20:26.125 21:35:48 -- nvmf/common.sh@545 -- # jq . 
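
[Editor's note] The --json /dev/fd/63 argument in the traced command line is the read end of a shell process substitution, so the config generated above never touches disk. A sketch of the wiring, with the flags as traced for this run (queue depth 64, 64 KiB I/Os, verify workload, 10 s); the process-substitution form is an assumption about how the suite invokes it:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10
    # <(...) expands to a /dev/fd/N path that bdevperf opens and reads.
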
00:20:26.125 21:35:48 -- nvmf/common.sh@546 -- # IFS=, 00:20:26.125 21:35:48 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:26.125 "params": { 00:20:26.125 "name": "Nvme1", 00:20:26.125 "trtype": "tcp", 00:20:26.125 "traddr": "10.0.0.2", 00:20:26.125 "adrfam": "ipv4", 00:20:26.125 "trsvcid": "4420", 00:20:26.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.125 "hdgst": false, 00:20:26.125 "ddgst": false 00:20:26.125 }, 00:20:26.125 "method": "bdev_nvme_attach_controller" 00:20:26.125 },{ 00:20:26.125 "params": { 00:20:26.125 "name": "Nvme2", 00:20:26.125 "trtype": "tcp", 00:20:26.125 "traddr": "10.0.0.2", 00:20:26.125 "adrfam": "ipv4", 00:20:26.125 "trsvcid": "4420", 00:20:26.125 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:26.125 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:26.125 "hdgst": false, 00:20:26.125 "ddgst": false 00:20:26.125 }, 00:20:26.125 "method": "bdev_nvme_attach_controller" 00:20:26.125 },{ 00:20:26.125 "params": { 00:20:26.125 "name": "Nvme3", 00:20:26.125 "trtype": "tcp", 00:20:26.125 "traddr": "10.0.0.2", 00:20:26.125 "adrfam": "ipv4", 00:20:26.125 "trsvcid": "4420", 00:20:26.125 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:26.125 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:26.125 "hdgst": false, 00:20:26.125 "ddgst": false 00:20:26.125 }, 00:20:26.125 "method": "bdev_nvme_attach_controller" 00:20:26.125 },{ 00:20:26.125 "params": { 00:20:26.125 "name": "Nvme4", 00:20:26.125 "trtype": "tcp", 00:20:26.125 "traddr": "10.0.0.2", 00:20:26.125 "adrfam": "ipv4", 00:20:26.125 "trsvcid": "4420", 00:20:26.125 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:26.125 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:26.125 "hdgst": false, 00:20:26.125 "ddgst": false 00:20:26.125 }, 00:20:26.125 "method": "bdev_nvme_attach_controller" 00:20:26.125 },{ 00:20:26.125 "params": { 00:20:26.125 "name": "Nvme5", 00:20:26.125 "trtype": "tcp", 00:20:26.125 "traddr": "10.0.0.2", 00:20:26.125 "adrfam": "ipv4", 00:20:26.125 "trsvcid": "4420", 00:20:26.125 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:26.125 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:26.125 "hdgst": false, 00:20:26.125 "ddgst": false 00:20:26.125 }, 00:20:26.125 "method": "bdev_nvme_attach_controller" 00:20:26.125 },{ 00:20:26.125 "params": { 00:20:26.125 "name": "Nvme6", 00:20:26.125 "trtype": "tcp", 00:20:26.125 "traddr": "10.0.0.2", 00:20:26.125 "adrfam": "ipv4", 00:20:26.125 "trsvcid": "4420", 00:20:26.125 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:26.125 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:26.125 "hdgst": false, 00:20:26.125 "ddgst": false 00:20:26.125 }, 00:20:26.125 "method": "bdev_nvme_attach_controller" 00:20:26.125 },{ 00:20:26.125 "params": { 00:20:26.125 "name": "Nvme7", 00:20:26.125 "trtype": "tcp", 00:20:26.125 "traddr": "10.0.0.2", 00:20:26.125 "adrfam": "ipv4", 00:20:26.125 "trsvcid": "4420", 00:20:26.125 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:26.125 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:26.125 "hdgst": false, 00:20:26.125 "ddgst": false 00:20:26.125 }, 00:20:26.125 "method": "bdev_nvme_attach_controller" 00:20:26.125 },{ 00:20:26.125 "params": { 00:20:26.125 "name": "Nvme8", 00:20:26.125 "trtype": "tcp", 00:20:26.125 "traddr": "10.0.0.2", 00:20:26.125 "adrfam": "ipv4", 00:20:26.125 "trsvcid": "4420", 00:20:26.125 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:26.125 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:26.125 "hdgst": false, 00:20:26.125 "ddgst": false 00:20:26.125 }, 00:20:26.125 "method": 
"bdev_nvme_attach_controller" 00:20:26.125 },{ 00:20:26.125 "params": { 00:20:26.125 "name": "Nvme9", 00:20:26.125 "trtype": "tcp", 00:20:26.125 "traddr": "10.0.0.2", 00:20:26.125 "adrfam": "ipv4", 00:20:26.125 "trsvcid": "4420", 00:20:26.125 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:26.125 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:26.125 "hdgst": false, 00:20:26.125 "ddgst": false 00:20:26.125 }, 00:20:26.125 "method": "bdev_nvme_attach_controller" 00:20:26.125 },{ 00:20:26.125 "params": { 00:20:26.125 "name": "Nvme10", 00:20:26.125 "trtype": "tcp", 00:20:26.125 "traddr": "10.0.0.2", 00:20:26.125 "adrfam": "ipv4", 00:20:26.125 "trsvcid": "4420", 00:20:26.125 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:26.125 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:26.125 "hdgst": false, 00:20:26.125 "ddgst": false 00:20:26.125 }, 00:20:26.125 "method": "bdev_nvme_attach_controller" 00:20:26.125 }' 00:20:26.385 [2024-04-24 21:35:49.028921] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.385 [2024-04-24 21:35:49.096994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.763 Running I/O for 10 seconds... 00:20:27.763 21:35:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:27.763 21:35:50 -- common/autotest_common.sh@850 -- # return 0 00:20:27.763 21:35:50 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:27.763 21:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.763 21:35:50 -- common/autotest_common.sh@10 -- # set +x 00:20:28.022 21:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.022 21:35:50 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:28.022 21:35:50 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:28.022 21:35:50 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:28.022 21:35:50 -- target/shutdown.sh@57 -- # local ret=1 00:20:28.022 21:35:50 -- target/shutdown.sh@58 -- # local i 00:20:28.022 21:35:50 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:28.022 21:35:50 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:28.022 21:35:50 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:28.022 21:35:50 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:28.022 21:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.022 21:35:50 -- common/autotest_common.sh@10 -- # set +x 00:20:28.022 21:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.022 21:35:50 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:28.022 21:35:50 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:28.022 21:35:50 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:28.281 21:35:51 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:28.281 21:35:51 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:28.281 21:35:51 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:28.281 21:35:51 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:28.281 21:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.281 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:20:28.281 21:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.281 21:35:51 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:28.281 21:35:51 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:28.281 21:35:51 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:28.540 21:35:51 -- target/shutdown.sh@59 -- # (( i-- )) 
00:20:28.540 21:35:51 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:20:28.540 21:35:51 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:20:28.540 21:35:51 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:20:28.540 21:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:28.540 21:35:51 -- common/autotest_common.sh@10 -- # set +x
00:20:28.540 21:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:28.540 21:35:51 -- target/shutdown.sh@60 -- # read_io_count=193
00:20:28.540 21:35:51 -- target/shutdown.sh@63 -- # '[' 193 -ge 100 ']'
00:20:28.800 21:35:51 -- target/shutdown.sh@64 -- # ret=0
00:20:28.800 21:35:51 -- target/shutdown.sh@65 -- # break
00:20:28.800 21:35:51 -- target/shutdown.sh@69 -- # return 0
00:20:28.800 21:35:51 -- target/shutdown.sh@110 -- # killprocess 2910269
00:20:28.800 21:35:51 -- common/autotest_common.sh@936 -- # '[' -z 2910269 ']'
00:20:28.800 21:35:51 -- common/autotest_common.sh@940 -- # kill -0 2910269
00:20:28.800 21:35:51 -- common/autotest_common.sh@941 -- # uname
00:20:28.800 21:35:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:28.800 21:35:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2910269
00:20:28.800 21:35:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:28.800 21:35:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:28.800 21:35:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2910269'
00:20:28.800 killing process with pid 2910269
00:20:28.800 21:35:51 -- common/autotest_common.sh@955 -- # kill 2910269
00:20:28.800 21:35:51 -- common/autotest_common.sh@960 -- # wait 2910269
00:20:28.800 Received shutdown signal, test time was about 0.948506 seconds
00:20:28.800
00:20:28.800 Latency(us)
00:20:28.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:28.800 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:28.800 Verification LBA range: start 0x0 length 0x400
00:20:28.800 Nvme1n1 : 0.91 282.70 17.67 0.00 0.00 224112.84 19922.94 216426.09
00:20:28.800 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:28.800 Verification LBA range: start 0x0 length 0x400
00:20:28.800 Nvme2n1 : 0.89 287.22 17.95 0.00 0.00 216813.98 20132.66 209715.20
00:20:28.800 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:28.800 Verification LBA range: start 0x0 length 0x400
00:20:28.800 Nvme3n1 : 0.92 345.88 21.62 0.00 0.00 176520.70 17511.22 196293.43
00:20:28.800 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:28.800 Verification LBA range: start 0x0 length 0x400
00:20:28.800 Nvme4n1 : 0.88 290.62 18.16 0.00 0.00 206672.28 20027.80 203843.17
00:20:28.800 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:28.800 Verification LBA range: start 0x0 length 0x400
00:20:28.800 Nvme5n1 : 0.89 216.80 13.55 0.00 0.00 271845.79 20656.95 241591.91
00:20:28.800 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:28.800 Verification LBA range: start 0x0 length 0x400
00:20:28.800 Nvme6n1 : 0.95 270.08 16.88 0.00 0.00 206460.11 19293.80 212231.78
00:20:28.800 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:28.800 Verification LBA range: start 0x0 length 0x400
00:20:28.800 Nvme7n1 : 0.91 281.92 17.62 0.00 0.00 201175.45 31876.71 198810.01
00:20:28.800 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:28.800 Verification LBA range: start 0x0 length 0x400
00:20:28.800 Nvme8n1 : 0.92 279.59 17.47 0.00 0.00 200340.48 21076.38 213070.64
00:20:28.800 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:28.800 Verification LBA range: start 0x0 length 0x400
00:20:28.800 Nvme9n1 : 0.89 214.74 13.42 0.00 0.00 255330.99 20237.52 244947.35
00:20:28.800 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:28.800 Verification LBA range: start 0x0 length 0x400
00:20:28.800 Nvme10n1 : 0.91 279.84 17.49 0.00 0.00 192781.31 19084.08 226492.42
00:20:28.800 ===================================================================================================================
00:20:28.800 Total : 2749.38 171.84 0.00 0.00 211746.41 17511.22 244947.35
00:20:29.060 21:35:51 -- target/shutdown.sh@113 -- # sleep 1
00:20:29.998 21:35:52 -- target/shutdown.sh@114 -- # kill -0 2909964
00:20:29.998 21:35:52 -- target/shutdown.sh@116 -- # stoptarget
00:20:29.998 21:35:52 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:29.998 21:35:52 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:29.998 21:35:52 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:29.998 21:35:52 -- target/shutdown.sh@45 -- # nvmftestfini
00:20:29.998 21:35:52 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:29.998 21:35:52 -- nvmf/common.sh@117 -- # sync
00:20:29.998 21:35:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:29.998 21:35:52 -- nvmf/common.sh@120 -- # set +e
00:20:29.998 21:35:52 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:29.998 21:35:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:29.998 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:20:29.998 21:35:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:29.998 21:35:52 -- nvmf/common.sh@124 -- # set -e
00:20:29.998 21:35:52 -- nvmf/common.sh@125 -- # return 0
00:20:29.998 21:35:52 -- nvmf/common.sh@478 -- # '[' -n 2909964 ']'
00:20:29.998 21:35:52 -- nvmf/common.sh@479 -- # killprocess 2909964
00:20:29.998 21:35:52 -- common/autotest_common.sh@936 -- # '[' -z 2909964 ']'
00:20:29.998 21:35:52 -- common/autotest_common.sh@940 -- # kill -0 2909964
00:20:29.998 21:35:52 -- common/autotest_common.sh@941 -- # uname
00:20:29.998 21:35:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:29.998 21:35:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2909964
00:20:30.257 21:35:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:30.257 21:35:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:30.257 21:35:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2909964'
00:20:30.257 killing process with pid 2909964
00:20:30.257 21:35:52 -- common/autotest_common.sh@955 -- # kill 2909964
00:20:30.257 21:35:52 -- common/autotest_common.sh@960 -- # wait 2909964
00:20:30.517 21:35:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:30.517 21:35:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:30.517 21:35:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:30.517 21:35:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:30.517 21:35:53 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:30.517 21:35:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd
00:20:30.517 21:35:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:30.517 21:35:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:30.517 21:35:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:30.517 21:35:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:30.517 21:35:53 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:30.517 21:35:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:30.517 21:35:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:30.517 21:35:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:33.089 21:35:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:33.089
00:20:33.089 real 0m8.275s
00:20:33.089 user 0m25.147s
00:20:33.089 sys 0m1.621s
00:20:33.089 21:35:55 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:33.089 21:35:55 -- common/autotest_common.sh@10 -- # set +x
00:20:33.089 ************************************
00:20:33.089 END TEST nvmf_shutdown_tc2
00:20:33.089 ************************************
00:20:33.089 21:35:55 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
00:20:33.089 21:35:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:20:33.089 21:35:55 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:33.089 21:35:55 -- common/autotest_common.sh@10 -- # set +x
00:20:33.089 ************************************
00:20:33.089 START TEST nvmf_shutdown_tc3
00:20:33.089 ************************************
00:20:33.089 21:35:55 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3
00:20:33.089 21:35:55 -- target/shutdown.sh@121 -- # starttarget
00:20:33.089 21:35:55 -- target/shutdown.sh@15 -- # nvmftestinit
00:20:33.089 21:35:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:20:33.089 21:35:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:33.089 21:35:55 -- nvmf/common.sh@437 -- # prepare_net_devs
00:20:33.089 21:35:55 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:20:33.089 21:35:55 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:20:33.089 21:35:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:33.089 21:35:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:33.089 21:35:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:33.089 21:35:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:20:33.089 21:35:55 -- nvmf/common.sh@285 -- # xtrace_disable
00:20:33.089 21:35:55 -- common/autotest_common.sh@10 -- # set +x
00:20:33.089 21:35:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:20:33.089 21:35:55 -- nvmf/common.sh@291 -- # pci_devs=()
00:20:33.089 21:35:55 -- nvmf/common.sh@291 -- # local -a pci_devs
00:20:33.089 21:35:55 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:20:33.089 21:35:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:20:33.089 21:35:55 -- nvmf/common.sh@293 -- # pci_drivers=()
00:20:33.089 21:35:55 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:20:33.089 21:35:55 -- nvmf/common.sh@295 -- # net_devs=()
00:20:33.089 21:35:55 -- nvmf/common.sh@295 -- # local -ga net_devs
00:20:33.089 21:35:55 -- nvmf/common.sh@296 -- # e810=()
00:20:33.089 21:35:55 -- nvmf/common.sh@296 -- # local -ga e810
00:20:33.089 21:35:55 -- nvmf/common.sh@297 -- # x722=()
00:20:33.089 21:35:55 -- nvmf/common.sh@297 -- # local -ga x722
00:20:33.089 21:35:55 -- nvmf/common.sh@298 -- # mlx=()
00:20:33.089 21:35:55 -- nvmf/common.sh@298 -- # local -ga mlx
00:20:33.089 21:35:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:33.089 21:35:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:33.089 21:35:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:33.089 21:35:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:33.089 21:35:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:33.089 21:35:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:33.089 21:35:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:33.089 21:35:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:33.089 21:35:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:33.089 21:35:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:33.089 21:35:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:33.089 21:35:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:20:33.089 21:35:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:20:33.089 21:35:55 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:20:33.089 21:35:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:33.089 21:35:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:20:33.089 Found 0000:af:00.0 (0x8086 - 0x159b)
00:20:33.089 21:35:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:33.089 21:35:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:20:33.089 Found 0000:af:00.1 (0x8086 - 0x159b)
00:20:33.089 21:35:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:20:33.089 21:35:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:20:33.089 21:35:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:20:33.089 21:35:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:33.089 21:35:55 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:20:33.089 21:35:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:33.089 21:35:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:20:33.089 Found net devices under 0000:af:00.0: cvl_0_0
00:20:33.089 21:35:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:20:33.089 21:35:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:20:33.089 21:35:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:33.089 21:35:55 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:20:33.089 21:35:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:33.089 21:35:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:20:33.089 Found net devices under 0000:af:00.1: cvl_0_1
00:20:33.089 21:35:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
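The discovery loop above resolves each supported PCI function to its kernel netdev without lspci or udev: /sys/bus/pci/devices/$pci/net/ contains one subdirectory per bound network interface, so a glob plus basename stripping yields the device names (cvl_0_0 and cvl_0_1 here, custom names for the two E810 ports bound to the ice driver). The same idea as a standalone sketch, using the PCI addresses from this run:

    for pci in 0000:af:00.0 0000:af:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${pci_net_devs[0]} ]] || continue        # glob unmatched: no netdev bound
        pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the /sys/... prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done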
00:20:33.089 21:35:55 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:20:33.089 21:35:55 -- nvmf/common.sh@403 -- # is_hw=yes
00:20:33.089 21:35:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:20:33.090 21:35:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:20:33.090 21:35:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:20:33.090 21:35:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:33.090 21:35:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:33.090 21:35:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:33.090 21:35:55 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:20:33.090 21:35:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:33.090 21:35:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:33.090 21:35:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:20:33.090 21:35:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:33.090 21:35:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:33.090 21:35:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:20:33.090 21:35:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:20:33.090 21:35:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:20:33.090 21:35:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:33.090 21:35:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:33.090 21:35:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:33.090 21:35:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:33.090 21:35:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:33.090 21:35:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:33.090 21:35:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:33.090 21:35:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:33.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:33.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms
00:20:33.090
00:20:33.090 --- 10.0.0.2 ping statistics ---
00:20:33.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:33.090 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
00:20:33.090 21:35:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:33.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:33.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms
00:20:33.090
00:20:33.090 --- 10.0.0.1 ping statistics ---
00:20:33.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:33.090 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
00:20:33.090 21:35:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:33.090 21:35:55 -- nvmf/common.sh@411 -- # return 0
00:20:33.090 21:35:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:20:33.090 21:35:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:33.090 21:35:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:20:33.090 21:35:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:20:33.090 21:35:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:33.090 21:35:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:20:33.090 21:35:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
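nvmf_tcp_init, traced above, wires up the test topology: the target-side port cvl_0_0 is moved into the private namespace cvl_0_0_ns_spdk with 10.0.0.2/24, its sibling cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction proves the path before the target starts. Condensed to its essentials, with the same interface names and addresses as the trace:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
    ping -c 1 10.0.0.2                                       # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # namespace -> root ns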
00:20:33.353 21:35:55 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:20:33.353 21:35:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:20:33.353 21:35:55 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:33.353 21:35:55 -- common/autotest_common.sh@10 -- # set +x
00:20:33.353 21:35:56 -- nvmf/common.sh@470 -- # nvmfpid=2911713
00:20:33.353 21:35:56 -- nvmf/common.sh@471 -- # waitforlisten 2911713
00:20:33.353 21:35:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:20:33.353 21:35:56 -- common/autotest_common.sh@817 -- # '[' -z 2911713 ']'
00:20:33.353 21:35:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:33.353 21:35:56 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:33.353 21:35:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:33.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:33.353 21:35:56 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:33.353 21:35:56 -- common/autotest_common.sh@10 -- # set +x
00:20:33.353 [2024-04-24 21:35:56.063143] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:20:33.353 [2024-04-24 21:35:56.063196] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:33.353 EAL: No free 2048 kB hugepages reported on node 1
00:20:33.353 [2024-04-24 21:35:56.137127] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:33.353 [2024-04-24 21:35:56.209749] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:33.353 [2024-04-24 21:35:56.209785] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:33.353 [2024-04-24 21:35:56.209794] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:33.353 [2024-04-24 21:35:56.209802] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:33.353 [2024-04-24 21:35:56.209812] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:33.353 [2024-04-24 21:35:56.209914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:33.353 [2024-04-24 21:35:56.209994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:20:33.353 [2024-04-24 21:35:56.210105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:33.353 [2024-04-24 21:35:56.210106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:20:34.292 21:35:56 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:34.292 21:35:56 -- common/autotest_common.sh@850 -- # return 0
00:20:34.292 21:35:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:20:34.292 21:35:56 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:34.292 21:35:56 -- common/autotest_common.sh@10 -- # set +x
00:20:34.292 21:35:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:34.292 21:35:56 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:34.292 21:35:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:34.292 21:35:56 -- common/autotest_common.sh@10 -- # set +x
00:20:34.292 [2024-04-24 21:35:56.909101] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:34.293 21:35:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:34.293 21:35:56 -- target/shutdown.sh@22 -- # num_subsystems=({1..10})
00:20:34.293 21:35:56 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:20:34.293 21:35:56 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:34.293 21:35:56 -- common/autotest_common.sh@10 -- # set +x
00:20:34.293 21:35:56 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:34.293 21:35:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:20:34.293 21:35:56 -- target/shutdown.sh@28 -- # cat
(the @27/@28 for/cat pair above repeats identically for each of the ten subsystems)
00:20:34.293 21:35:56 -- target/shutdown.sh@35 -- # rpc_cmd
00:20:34.293 21:35:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:34.293 21:35:56 -- common/autotest_common.sh@10 -- # set +x
00:20:34.293 Malloc1
00:20:34.293 [2024-04-24 21:35:57.019803] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:34.293 Malloc2
00:20:34.293 Malloc3
00:20:34.293 Malloc4
00:20:34.293 Malloc5
00:20:34.553 Malloc6
00:20:34.553 Malloc7
00:20:34.553 Malloc8
00:20:34.553 Malloc9
00:20:34.553 Malloc10
00:20:34.553 21:35:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:34.553 21:35:57 -- target/shutdown.sh@36 -- # timing_exit create_subsystems
00:20:34.553 21:35:57 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:34.553 21:35:57 -- common/autotest_common.sh@10 -- # set +x
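The rpcs.txt batch assembled by the ten cat invocations above is never echoed into the log, but its visible effects (Malloc1 through Malloc10 plus a listener on 10.0.0.2:4420) match the standard per-subsystem RPC sequence. One plausible rendering for a single subsystem using the stock scripts/rpc.py method names; the bdev size, block size, and serial number here are illustrative assumptions, not values read from this log:

    i=1
    rpc.py bdev_malloc_create -b Malloc$i 64 512                 # assumed 64 MiB / 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420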
00:20:34.814 "adrfam": "ipv4", 00:20:34.814 "trsvcid": "$NVMF_PORT", 00:20:34.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.814 "hdgst": ${hdgst:-false}, 00:20:34.814 "ddgst": ${ddgst:-false} 00:20:34.814 }, 00:20:34.814 "method": "bdev_nvme_attach_controller" 00:20:34.814 } 00:20:34.814 EOF 00:20:34.814 )") 00:20:34.814 21:35:57 -- nvmf/common.sh@543 -- # cat 00:20:34.814 21:35:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:34.814 21:35:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:34.814 { 00:20:34.814 "params": { 00:20:34.814 "name": "Nvme$subsystem", 00:20:34.814 "trtype": "$TEST_TRANSPORT", 00:20:34.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.814 "adrfam": "ipv4", 00:20:34.814 "trsvcid": "$NVMF_PORT", 00:20:34.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.814 "hdgst": ${hdgst:-false}, 00:20:34.814 "ddgst": ${ddgst:-false} 00:20:34.814 }, 00:20:34.814 "method": "bdev_nvme_attach_controller" 00:20:34.814 } 00:20:34.814 EOF 00:20:34.814 )") 00:20:34.814 21:35:57 -- nvmf/common.sh@543 -- # cat 00:20:34.814 21:35:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:34.814 21:35:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:34.814 { 00:20:34.814 "params": { 00:20:34.814 "name": "Nvme$subsystem", 00:20:34.814 "trtype": "$TEST_TRANSPORT", 00:20:34.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.814 "adrfam": "ipv4", 00:20:34.814 "trsvcid": "$NVMF_PORT", 00:20:34.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.814 "hdgst": ${hdgst:-false}, 00:20:34.814 "ddgst": ${ddgst:-false} 00:20:34.814 }, 00:20:34.814 "method": "bdev_nvme_attach_controller" 00:20:34.814 } 00:20:34.814 EOF 00:20:34.814 )") 00:20:34.814 21:35:57 -- nvmf/common.sh@543 -- # cat 00:20:34.814 21:35:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:34.814 21:35:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:34.814 { 00:20:34.814 "params": { 00:20:34.814 "name": "Nvme$subsystem", 00:20:34.814 "trtype": "$TEST_TRANSPORT", 00:20:34.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.814 "adrfam": "ipv4", 00:20:34.814 "trsvcid": "$NVMF_PORT", 00:20:34.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.814 "hdgst": ${hdgst:-false}, 00:20:34.814 "ddgst": ${ddgst:-false} 00:20:34.815 }, 00:20:34.815 "method": "bdev_nvme_attach_controller" 00:20:34.815 } 00:20:34.815 EOF 00:20:34.815 )") 00:20:34.815 21:35:57 -- nvmf/common.sh@543 -- # cat 00:20:34.815 [2024-04-24 21:35:57.503557] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:20:34.815 [2024-04-24 21:35:57.503611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912017 ] 00:20:34.815 21:35:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:34.815 21:35:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:34.815 { 00:20:34.815 "params": { 00:20:34.815 "name": "Nvme$subsystem", 00:20:34.815 "trtype": "$TEST_TRANSPORT", 00:20:34.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.815 "adrfam": "ipv4", 00:20:34.815 "trsvcid": "$NVMF_PORT", 00:20:34.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.815 "hdgst": ${hdgst:-false}, 00:20:34.815 "ddgst": ${ddgst:-false} 00:20:34.815 }, 00:20:34.815 "method": "bdev_nvme_attach_controller" 00:20:34.815 } 00:20:34.815 EOF 00:20:34.815 )") 00:20:34.815 21:35:57 -- nvmf/common.sh@543 -- # cat 00:20:34.815 21:35:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:34.815 21:35:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:34.815 { 00:20:34.815 "params": { 00:20:34.815 "name": "Nvme$subsystem", 00:20:34.815 "trtype": "$TEST_TRANSPORT", 00:20:34.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.815 "adrfam": "ipv4", 00:20:34.815 "trsvcid": "$NVMF_PORT", 00:20:34.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.815 "hdgst": ${hdgst:-false}, 00:20:34.815 "ddgst": ${ddgst:-false} 00:20:34.815 }, 00:20:34.815 "method": "bdev_nvme_attach_controller" 00:20:34.815 } 00:20:34.815 EOF 00:20:34.815 )") 00:20:34.815 21:35:57 -- nvmf/common.sh@543 -- # cat 00:20:34.815 21:35:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:34.815 21:35:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:34.815 { 00:20:34.815 "params": { 00:20:34.815 "name": "Nvme$subsystem", 00:20:34.815 "trtype": "$TEST_TRANSPORT", 00:20:34.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.815 "adrfam": "ipv4", 00:20:34.815 "trsvcid": "$NVMF_PORT", 00:20:34.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.815 "hdgst": ${hdgst:-false}, 00:20:34.815 "ddgst": ${ddgst:-false} 00:20:34.815 }, 00:20:34.815 "method": "bdev_nvme_attach_controller" 00:20:34.815 } 00:20:34.815 EOF 00:20:34.815 )") 00:20:34.815 21:35:57 -- nvmf/common.sh@543 -- # cat 00:20:34.815 21:35:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:34.815 21:35:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:34.815 { 00:20:34.815 "params": { 00:20:34.815 "name": "Nvme$subsystem", 00:20:34.815 "trtype": "$TEST_TRANSPORT", 00:20:34.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.815 "adrfam": "ipv4", 00:20:34.815 "trsvcid": "$NVMF_PORT", 00:20:34.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.815 "hdgst": ${hdgst:-false}, 00:20:34.815 "ddgst": ${ddgst:-false} 00:20:34.815 }, 00:20:34.815 "method": "bdev_nvme_attach_controller" 00:20:34.815 } 00:20:34.815 EOF 00:20:34.815 )") 00:20:34.815 21:35:57 -- nvmf/common.sh@543 -- # cat 00:20:34.815 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.815 21:35:57 -- nvmf/common.sh@545 -- # jq . 
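A detail worth noticing in the trace above: bdevperf receives its configuration as --json /dev/fd/63, which is what a bash process substitution looks like once expanded. gen_nvmf_target_json emits one bdev_nvme_attach_controller block per subsystem and jq merges them into a single document on that descriptor. The invocation, reduced to its shape with paths shortened:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10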
00:20:34.815 21:35:57 -- nvmf/common.sh@546 -- # IFS=,
00:20:34.815 21:35:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:20:34.815 "params": {
00:20:34.815 "name": "Nvme1",
00:20:34.815 "trtype": "tcp",
00:20:34.815 "traddr": "10.0.0.2",
00:20:34.815 "adrfam": "ipv4",
00:20:34.815 "trsvcid": "4420",
00:20:34.815 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:34.815 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:34.815 "hdgst": false,
00:20:34.815 "ddgst": false
00:20:34.815 },
00:20:34.815 "method": "bdev_nvme_attach_controller"
00:20:34.815 },{
(nine more "params" blocks follow for Nvme2 through Nvme10, identical apart from the index in "name", "subnqn", and "hostnqn")
00:20:34.816 }'
00:20:34.816 [2024-04-24 21:35:57.574675] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:34.816 [2024-04-24 21:35:57.642108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:36.723 Running I/O for 10 seconds...
00:20:37.300 21:36:00 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:37.300 21:36:00 -- common/autotest_common.sh@850 -- # return 0
00:20:37.300 21:36:00 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:20:37.300 21:36:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:37.300 21:36:00 -- common/autotest_common.sh@10 -- # set +x
00:20:37.300 21:36:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:37.300 21:36:00 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:37.300 21:36:00 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:20:37.300 21:36:00 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:20:37.300 21:36:00 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:20:37.300 21:36:00 -- target/shutdown.sh@57 -- # local ret=1
00:20:37.300 21:36:00 -- target/shutdown.sh@58 -- # local i
00:20:37.300 21:36:00 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:20:37.300 21:36:00 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:20:37.301 21:36:00 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:20:37.301 21:36:00 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:37.301 21:36:00 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:20:37.301 21:36:00 -- common/autotest_common.sh@10 -- # set +x
00:20:37.301 21:36:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:37.301 21:36:00 -- target/shutdown.sh@60 -- # read_io_count=129
00:20:37.301 21:36:00 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']'
00:20:37.301 21:36:00 -- target/shutdown.sh@64 -- # ret=0
00:20:37.301 21:36:00 -- target/shutdown.sh@65 -- # break
00:20:37.301 21:36:00 -- target/shutdown.sh@69 -- # return 0
00:20:37.301 21:36:00 -- target/shutdown.sh@135 -- # killprocess 2911713
00:20:37.301 21:36:00 -- common/autotest_common.sh@936 -- # '[' -z 2911713 ']'
00:20:37.301 21:36:00 -- common/autotest_common.sh@940 -- # kill -0 2911713
00:20:37.301 21:36:00 -- common/autotest_common.sh@941 -- # uname
00:20:37.301 21:36:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:37.301 21:36:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2911713
00:20:37.301 21:36:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:37.301 21:36:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:37.301 21:36:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2911713'
00:20:37.301 killing process with pid 2911713
00:20:37.301 21:36:00 -- common/autotest_common.sh@955 -- # kill 2911713
00:20:37.301 21:36:00 -- common/autotest_common.sh@960 -- # wait 2911713
00:20:37.301 [2024-04-24 21:36:00.141776] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a8260 is same with the state(5) to be set
00:20:37.301 [2024-04-24 21:36:00.142611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aabb0 is same with the state(5) to be set
00:20:37.301 [2024-04-24 21:36:00.143718] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a86f0 is same with the state(5) to be set
00:20:37.301 (the same tcp.c:1587:nvmf_tcp_qpair_set_recv_state *ERROR* line repeats continuously from 21:36:00.143735 through 21:36:00.150176 while the target drains its queue pairs, for tqpair values 0x12a86f0, 0x12a8b80, 0x12a9030, 0x12a94c0, 0x12a9970, 0x12a9e00, and 0x12aa290; the captured log breaks off mid-repetition)
*ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150184] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150201] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150210] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150222] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150231] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150240] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150248] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150265] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150275] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150284] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150303] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150311] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150320] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150329] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150338] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150363] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 
21:36:00.150372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150381] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150397] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.303 [2024-04-24 21:36:00.150406] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150423] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150432] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150449] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150465] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150473] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150482] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150490] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150507] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150516] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150525] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150534] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150543] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150552] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.150561] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa290 is same 
with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.151096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa720 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.151112] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa720 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.151122] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa720 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.151131] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa720 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.151139] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa720 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.151148] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa720 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.151157] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa720 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.151166] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa720 is same with the state(5) to be set 00:20:37.304 [2024-04-24 21:36:00.155492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155652] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.155982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.155992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.156002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.156012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.156021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.156031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.156041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.156051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.304 [2024-04-24 21:36:00.156061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.304 [2024-04-24 21:36:00.156071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:37.305 [2024-04-24 21:36:00.156668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.305 [2024-04-24 21:36:00.156679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.305 [2024-04-24 21:36:00.156688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.306 [2024-04-24 21:36:00.156698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.306 [2024-04-24 21:36:00.156707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.306 [2024-04-24 21:36:00.156717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.306 [2024-04-24 21:36:00.156727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.306 [2024-04-24 21:36:00.156738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.306 [2024-04-24 21:36:00.156746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.306 [2024-04-24 21:36:00.156757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.306 [2024-04-24 21:36:00.156767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.306 [2024-04-24 21:36:00.156778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.306 [2024-04-24 21:36:00.156787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.306 [2024-04-24 21:36:00.156798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.306 [2024-04-24 21:36:00.156807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.306 [2024-04-24 21:36:00.157208] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2513d30 was disconnected and freed. reset controller. 
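[Editor's note] The storm condensed above is a single SPDK guard firing repeatedly: nvmf_tcp_qpair_set_recv_state() on the target side (tcp.c:1587; its host-side twin nvme_tcp_qpair_set_recv_state() at nvme_tcp.c:322 appears a few entries below) logs an error and returns whenever a qpair is asked to enter the receive state it is already in, which happens over and over while an errored TCP connection is torn down. A minimal C sketch of that guard follows, using trimmed-down stand-ins for SPDK's qpair struct and recv-state enum; the real definitions carry more states and per-PDU bookkeeping, and the enum ordering that makes the error state "state(5)" is an assumption here:

#include <stdio.h>

/* Stand-in for SPDK's enum nvme_tcp_pdu_recv_state; the log's "state(5)"
 * suggests the sixth enumerator, assumed here to be the error state. */
enum pdu_recv_state {
	RECV_STATE_AWAIT_PDU_READY = 0,
	/* ... intermediate PDU-parsing states elided ... */
	RECV_STATE_ERROR = 5,
};

/* Stand-in for the transport qpair; only the field the guard needs. */
struct tcp_qpair {
	enum pdu_recv_state recv_state;
};

static void
set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Redundant transition: reported but otherwise a no-op,
		 * which is why teardown can emit the same line many times. */
		fprintf(stderr,
		        "The recv state of tqpair=%p is same with the state(%d) to be set\n",
		        (void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state; /* real code also resets PDU bookkeeping */
}

int main(void)
{
	struct tcp_qpair q = { .recv_state = RECV_STATE_ERROR };
	set_recv_state(&q, RECV_STATE_ERROR); /* reproduces the log line */
	return 0;
}

Because the guard returns without side effects, the repetition is noisy but harmless; the meaningful events in this stretch of log are the qpair disconnect and controller reset that bracket it.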
00:20:37.306 [2024-04-24 21:36:00.157273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:37.306 [2024-04-24 21:36:00.157286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same admin command/completion pair repeats for cid:1-3, and each group of four is followed by one host-side recv-state error; ten such groups in total (21:36:00.157273-.158317), for tqpair=0x264ffe0, 0x264ca30, 0x24d5fe0, 0x24d5510, 0x24953d0, 0x2493fb0, 0x24b16f0, 0x264ecc0, 0x24b22a0 and 0x20738e0, each reported as: nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(5) to be set ...]
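[Editor's note] The "(00/08)" printed by spdk_nvme_print_completion throughout this log is the (status code type / status code) pair from the completion's status field: SCT 0x0 is the generic command status set, and SC 0x08 is "Command Aborted due to SQ Deletion". The host keeps several ASYNC EVENT REQUEST admin commands permanently outstanding (cid:0-3 per controller in the entries above), so deleting the admin submission queue during the reset fails them all back with that status, just like the data-path WRITE/READ commands on qid:1. A short C sketch of decoding the pair from the upper half of completion dword 3, following the NVMe base-spec bit layout (the struct below is a local stand-in, not SPDK's spdk_nvme_status):

#include <stdint.h>
#include <stdio.h>

/* Decoded view of an NVMe completion's status half (CQE DW3 bits 31:16). */
struct nvme_status {
	uint8_t sc;  /* status code */
	uint8_t sct; /* status code type */
	uint8_t dnr; /* do-not-retry flag */
};

static struct nvme_status decode_status(uint16_t status_phase)
{
	struct nvme_status s;
	s.sc  = (uint8_t)((status_phase >> 1) & 0xff); /* bits 8:1  */
	s.sct = (uint8_t)((status_phase >> 9) & 0x7);  /* bits 11:9 */
	s.dnr = (uint8_t)((status_phase >> 15) & 0x1); /* bit 15    */
	return s;
}

int main(void)
{
	/* SCT 0x0 (generic) / SC 0x08 encodes "Command Aborted due to
	 * SQ Deletion": the SC value sits one bit above the phase bit. */
	struct nvme_status s = decode_status(0x08 << 1);
	printf("sct=%#x sc=%#x dnr=%u -> %s\n", s.sct, s.sc, (unsigned)s.dnr,
	       (s.sct == 0x0 && s.sc == 0x08) ? "ABORTED - SQ DELETION" : "other");
	return 0;
}

Note that dnr:0 in the log matches this decoding: the aborts are retryable bookkeeping from tearing the queues down, not media or transport data errors.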
00:20:37.307 [2024-04-24 21:36:00.158385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.307 [2024-04-24 21:36:00.158396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for WRITE cid:54-63 (lba 23296-24448, step 128) and READ cid:0-24 (lba 16384-19456, step 128), 21:36:00.158410-.159125 ...]
00:20:37.308 [2024-04-24 21:36:00.159136] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.159145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.159156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.159165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.159177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.159186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.159197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.159206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.159217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.159226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.159237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.159246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.159257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.159266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.159277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.159286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.159296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.159308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.159319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.159328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.159338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.166982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.308 [2024-04-24 21:36:00.166992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.308 [2024-04-24 21:36:00.167065] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x251b740 was disconnected and freed. reset controller. 
00:20:37.308 [2024-04-24 21:36:00.167209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.308 [2024-04-24 21:36:00.167223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE (sqid:1 cid:32-63, lba:20480-24448) and READ (sqid:1 cid:0-30, lba:16384-20224) commands, each followed by the same ABORTED - SQ DELETION (00/08) qid:1 completion, elided ...]
00:20:37.310 [2024-04-24 21:36:00.168715] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x262bf50 was disconnected and freed. reset controller.
00:20:37.310 [2024-04-24 21:36:00.168804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.310 [2024-04-24 21:36:00.168818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE (sqid:1 cid:11-63, lba:25984-32640) and READ (sqid:1 cid:0-9, lba:24576-25728) commands, each followed by the same ABORTED - SQ DELETION (00/08) qid:1 completion, elided ...]
00:20:37.312 [2024-04-24 21:36:00.170662] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x251ce70 was disconnected and freed. reset controller.
00:20:37.312 [2024-04-24 21:36:00.173331] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x264ffe0 (9): Bad file descriptor
00:20:37.312 [2024-04-24 21:36:00.173380] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x264ca30 (9): Bad file descriptor
00:20:37.312 [2024-04-24 21:36:00.173402] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d5fe0 (9): Bad file descriptor
00:20:37.312 [2024-04-24 21:36:00.173427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d5510 (9): Bad file descriptor
00:20:37.312 [2024-04-24 21:36:00.173447] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24953d0 (9): Bad file descriptor
00:20:37.312 [2024-04-24 21:36:00.173473] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2493fb0 (9): Bad file descriptor
00:20:37.312 [2024-04-24 21:36:00.173495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b16f0 (9): Bad file descriptor
00:20:37.312 [2024-04-24 21:36:00.173515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x264ecc0 (9): Bad file descriptor
00:20:37.312 [2024-04-24 21:36:00.173535] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b22a0 (9): Bad file descriptor
00:20:37.312 [2024-04-24 21:36:00.173553] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20738e0 (9): Bad file descriptor
00:20:37.312 [2024-04-24 21:36:00.177312-177920] nvme_qpair.c: 243/474: *NOTICE*: 20 command/completion pairs: WRITE sqid:1 cid:44-63 nsid:1 lba:30208-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.312 [2024-04-24 21:36:00.177935-179027] nvme_qpair.c: 243/474: *NOTICE*: 44 pairs, same pattern: READ sqid:1 cid:0-43 nsid:1 lba:24576-30080 (step 128) len:128, each ABORTED - SQ DELETION (00/08)
00:20:37.314 [2024-04-24 21:36:00.179103] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2480150 was disconnected and freed. reset controller.
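The (00/08) status flooding the dumps above is SCT 0x00 (generic) / SC 0x08, ABORTED - SQ DELETION: when a qpair is torn down during a controller reset, every command still queued on it completes with this status rather than executing. A minimal sketch of how a completion callback can recognize it, using only public spdk/nvme.h definitions; the callback name io_complete is illustrative, not taken from this test:

    /* Illustrative sketch -- not from the test above. Standard
     * spdk_nvme_cmd_cb shape, classifying the (00/08) completions. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* The submission queue was deleted (here: qpair freed
                     * during reset) while the command was in flight; the
                     * I/O never executed and can be resubmitted once the
                     * controller reconnects. */
                    fprintf(stderr, "I/O aborted by SQ deletion; retry after reset\n");
            } else if (spdk_nvme_cpl_is_error(cpl)) {
                    fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
                            cpl->status.sct, cpl->status.sc);
            }
    }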
00:20:37.314 [2024-04-24 21:36:00.179191-180586] nvme_qpair.c: 243/474: *NOTICE*: 64 command/completion pairs: WRITE sqid:1 cid:0-63 nsid:1 lba:24576-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.582 [2024-04-24 21:36:00.180657] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2482930 was disconnected and freed. reset controller.
00:20:37.582 [2024-04-24 21:36:00.180693] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:37.582 [2024-04-24 21:36:00.180709] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.582 [2024-04-24 21:36:00.182943] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:37.582 [2024-04-24 21:36:00.182977] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:37.582 [2024-04-24 21:36:00.183187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.582 [2024-04-24 21:36:00.183611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.582 [2024-04-24 21:36:00.183626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d5510 with addr=10.0.0.2, port=4420
00:20:37.582 [2024-04-24 21:36:00.183638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5510 is same with the state(5) to be set
00:20:37.582 [2024-04-24 21:36:00.184043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.582 [2024-04-24 21:36:00.184447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.582 [2024-04-24 21:36:00.184473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20738e0 with addr=10.0.0.2, port=4420
00:20:37.582 [2024-04-24 21:36:00.184485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20738e0 is same with the state(5) to be set
00:20:37.582 [2024-04-24 21:36:00.185227] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:37.582 [2024-04-24 21:36:00.185338] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:37.582 [2024-04-24 21:36:00.185374] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:37.582 [2024-04-24 21:36:00.185395] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:37.582 [2024-04-24 21:36:00.185776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.582 [2024-04-24 21:36:00.186178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.582 [2024-04-24 21:36:00.186191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b22a0 with addr=10.0.0.2, port=4420
00:20:37.582 [2024-04-24 21:36:00.186202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b22a0 is same with the state(5) to be set
00:20:37.582 [2024-04-24 21:36:00.186538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.582 [2024-04-24 21:36:00.186944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.582 [2024-04-24 21:36:00.186963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b16f0 with addr=10.0.0.2, port=4420
00:20:37.582 [2024-04-24 21:36:00.186973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b16f0 is same with the state(5) to be set
00:20:37.582 [2024-04-24 21:36:00.186987] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d5510 (9): Bad file descriptor
00:20:37.583 [2024-04-24 21:36:00.187000] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20738e0 (9): Bad file descriptor
00:20:37.583 [2024-04-24 21:36:00.187099] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:37.583 [2024-04-24 21:36:00.187176] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:37.583 [2024-04-24 21:36:00.187959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.583 [2024-04-24 21:36:00.188249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.583 [2024-04-24 21:36:00.188263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x264ecc0 with addr=10.0.0.2, port=4420
00:20:37.583 [2024-04-24 21:36:00.188274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264ecc0 is same with the state(5) to be set
00:20:37.583 [2024-04-24 21:36:00.188687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.583 [2024-04-24 21:36:00.189086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.583 [2024-04-24 21:36:00.189099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24953d0 with addr=10.0.0.2, port=4420
00:20:37.583 [2024-04-24 21:36:00.189109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24953d0 is same with the state(5) to be set
00:20:37.583 [2024-04-24 21:36:00.189123] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b22a0 (9): Bad file descriptor
00:20:37.583 [2024-04-24 21:36:00.189135] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b16f0 (9): Bad file descriptor
00:20:37.583 [2024-04-24 21:36:00.189146] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
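errno = 111 in the connect() failures above is ECONNREFUSED on Linux: while the nqn.2016-06.io.spdk:cnode* subsystems are being reset, nothing is accepting on 10.0.0.2:4420, so each reconnect attempt is refused until the target listener comes back. A small self-contained check (illustrative only, not part of the test):

    /* Illustrative sketch: confirm that errno 111 is ECONNREFUSED. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
            /* On Linux this prints "Connection refused" and "yes". */
            printf("errno 111 = %s (ECONNREFUSED? %s)\n",
                   strerror(111), ECONNREFUSED == 111 ? "yes" : "no");
            return 0;
    }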
00:20:37.583 [2024-04-24 21:36:00.189156] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:20:37.583 [2024-04-24 21:36:00.189168] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:20:37.583 [2024-04-24 21:36:00.189182] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.583 [2024-04-24 21:36:00.189192] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.583 [2024-04-24 21:36:00.189202] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.583 [2024-04-24 21:36:00.189265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.583 [2024-04-24 21:36:00.189279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 further READ / ABORTED - SQ DELETION (00/08) pairs (21:36:00.189298 through 21:36:00.190604), identical except cid:1-62 and lba 16512-24320 stepping by 128 ...]
00:20:37.584 [2024-04-24 21:36:00.190615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.584 [2024-04-24 21:36:00.190625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.584 [2024-04-24 21:36:00.190636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x262ab30 is same with the state(5) to be set
00:20:37.584 [2024-04-24 21:36:00.191627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.584 [2024-04-24 21:36:00.191642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 further READ / ABORTED - SQ DELETION (00/08) pairs (21:36:00.191657 through 21:36:00.192944), identical except cid:1-62 and lba 8320-16128 stepping by 128 ...]
00:20:37.586 [2024-04-24 21:36:00.192955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.586 [2024-04-24 21:36:00.192964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.586 [2024-04-24 21:36:00.192974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251e320 is same with the state(5) to be set
00:20:37.586 [2024-04-24 21:36:00.193941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.586 [2024-04-24 21:36:00.193956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 58 further READ / ABORTED - SQ DELETION (00/08) pairs (21:36:00.193970 through 21:36:00.195164), identical except cid:1-58 and lba 16512-23808 stepping by 128 ...]
00:20:37.588 [2024-04-24 21:36:00.195175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.588 [2024-04-24 21:36:00.195185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:37.588 [2024-04-24
21:36:00.195198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.195210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.195221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.195230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.195242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.195251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.195262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.195271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.195281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x247eca0 is same with the state(5) to be set 00:20:37.588 [2024-04-24 21:36:00.196265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.588 [2024-04-24 21:36:00.196807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.588 [2024-04-24 21:36:00.196818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.196827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.196838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.196847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.196859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.196869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.196880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.196890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.196900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.196910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.196921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.196930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.196941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.196951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.196963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.196973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.196983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.196993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.589 [2024-04-24 21:36:00.197596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.589 [2024-04-24 21:36:00.197605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2481600 is same with the state(5) to be set 00:20:37.589 [2024-04-24 21:36:00.198807] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:37.589 [2024-04-24 21:36:00.198824] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:37.589 [2024-04-24 21:36:00.198835] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:37.590 [2024-04-24 21:36:00.198848] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:37.590 [2024-04-24 21:36:00.198861] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:37.590 [2024-04-24 21:36:00.198895] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x264ecc0 (9): Bad file descriptor
00:20:37.590 [2024-04-24 21:36:00.198909] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24953d0 (9): Bad file descriptor
00:20:37.590 [2024-04-24 21:36:00.198922] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:20:37.590 [2024-04-24 21:36:00.198931] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:20:37.590 [2024-04-24 21:36:00.198942] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:20:37.590 [2024-04-24 21:36:00.198956] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:37.590 [2024-04-24 21:36:00.198965] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:37.590 [2024-04-24 21:36:00.198975] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:37.590 [2024-04-24 21:36:00.199014] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:37.590 [2024-04-24 21:36:00.199029] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:37.590 [2024-04-24 21:36:00.199046] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:37.590 [2024-04-24 21:36:00.199061] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:37.590 [2024-04-24 21:36:00.199075] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:37.590 task offset: 19072 on job bdev=Nvme10n1 fails
00:20:37.590
00:20:37.590 Latency(us)
00:20:37.590 [all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended with error, in about the runtime shown]
00:20:37.590 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average        min        max
00:20:37.590 Nvme1n1             :       0.73  175.61  10.98   87.80  0.00  240228.76   18664.65  211392.92
00:20:37.590 Nvme2n1             :       0.75  171.61  10.73   85.81  0.00  240973.96   21390.95  216426.09
00:20:37.590 Nvme3n1             :       0.73  175.32  10.96   87.66  0.00  230690.82   20447.23  226492.42
00:20:37.590 Nvme4n1             :       0.73  262.57  16.41   87.52  0.00  169481.01   20342.37  187065.96
00:20:37.590 Nvme5n1             :       0.75   85.54   5.35   85.54  0.00  340128.56   38587.60  288568.12
00:20:37.590 Nvme6n1             :       0.75  170.55  10.66   85.28  0.00  222534.04   22439.53  212231.78
00:20:37.590 Nvme7n1             :       0.74  260.81  16.30   86.94  0.00  159533.47    9909.04  177838.49
00:20:37.590 Nvme8n1             :       0.75  170.03  10.63   85.01  0.00  213429.73   21810.38  210554.06
00:20:37.590 Nvme9n1             :       0.74  260.50  16.28   86.83  0.00  152342.32    9961.47  204682.04
00:20:37.590 Nvme10n1            :       0.73  175.96  11.00   87.98  0.00  195018.75   20552.09  236558.75
00:20:37.590 ===================================================================================================================
00:20:37.590 Total               :            1908.50 119.28  866.38  0.00  207322.27    9909.04  288568.12
00:20:37.590 [2024-04-24 21:36:00.219815] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:37.590 [2024-04-24 21:36:00.219864] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
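
The table above is bdevperf's end-of-run summary: one row per NVMe bdev, with the Total row being the column-wise sum across devices (IOPS: 175.61 + 171.61 + 175.32 + 262.57 + 85.54 + 170.55 + 260.81 + 170.03 + 260.50 + 175.96 = 1908.50). As a hedged sketch, a standalone run with the same job parameters would look roughly like the command below; the binary path, the 10-second duration, and the config file name are assumptions, not this run's actual command line.

# Hypothetical bdevperf invocation matching the job parameters in the table
# (-q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds,
# -r RPC socket). A bdev configuration naming the NvmeXn1 bdevs would also be
# supplied; the paths here are illustrative only.
./build/examples/bdevperf -q 64 -o 65536 -w verify -t 10 -r /var/tmp/bdevperf.sock
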
00:20:37.590 [2024-04-24 21:36:00.219889] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.590 [2024-04-24 21:36:00.219899] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.590 [2024-04-24 21:36:00.220381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.590 [2024-04-24 21:36:00.220622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.590 [2024-04-24 21:36:00.220637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2493fb0 with addr=10.0.0.2, port=4420
00:20:37.590 [2024-04-24 21:36:00.220650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2493fb0 is same with the state(5) to be set
00:20:37.590 [2024-04-24 21:36:00.220892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.590 [2024-04-24 21:36:00.221304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.590 [2024-04-24 21:36:00.221318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x264ffe0 with addr=10.0.0.2, port=4420
00:20:37.590 [2024-04-24 21:36:00.221328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264ffe0 is same with the state(5) to be set
00:20:37.590 [2024-04-24 21:36:00.221779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.590 [2024-04-24 21:36:00.222135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.590 [2024-04-24 21:36:00.222148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x264ca30 with addr=10.0.0.2, port=4420
00:20:37.590 [2024-04-24 21:36:00.222158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264ca30 is same with the state(5) to be set
00:20:37.590 [2024-04-24 21:36:00.222168] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:37.590 [2024-04-24 21:36:00.222178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:37.590 [2024-04-24 21:36:00.222190] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:20:37.590 [2024-04-24 21:36:00.222205] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:20:37.591 [2024-04-24 21:36:00.222215] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:20:37.591 [2024-04-24 21:36:00.222224] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:20:37.591 [2024-04-24 21:36:00.223132] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:37.591 [2024-04-24 21:36:00.223154] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:37.591 [2024-04-24 21:36:00.223166] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.591 [2024-04-24 21:36:00.223175] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.591 [2024-04-24 21:36:00.223580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.591 [2024-04-24 21:36:00.223939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.591 [2024-04-24 21:36:00.223954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d5fe0 with addr=10.0.0.2, port=4420
00:20:37.591 [2024-04-24 21:36:00.223965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5fe0 is same with the state(5) to be set
00:20:37.591 [2024-04-24 21:36:00.223980] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2493fb0 (9): Bad file descriptor
00:20:37.591 [2024-04-24 21:36:00.223995] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x264ffe0 (9): Bad file descriptor
00:20:37.591 [2024-04-24 21:36:00.224007] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x264ca30 (9): Bad file descriptor
00:20:37.591 [2024-04-24 21:36:00.224061] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:37.591 [2024-04-24 21:36:00.224080] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:37.591 [2024-04-24 21:36:00.224092] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:37.591 [2024-04-24 21:36:00.224549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.591 [2024-04-24 21:36:00.224968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.591 [2024-04-24 21:36:00.224982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20738e0 with addr=10.0.0.2, port=4420
00:20:37.591 [2024-04-24 21:36:00.224993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20738e0 is same with the state(5) to be set
00:20:37.591 [2024-04-24 21:36:00.225375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.591 [2024-04-24 21:36:00.225748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.591 [2024-04-24 21:36:00.225762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d5510 with addr=10.0.0.2, port=4420
00:20:37.591 [2024-04-24 21:36:00.225773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5510 is same with the state(5) to be set
00:20:37.591 [2024-04-24 21:36:00.225785] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d5fe0 (9): Bad file descriptor
00:20:37.591 [2024-04-24 21:36:00.225797] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:37.591 [2024-04-24 21:36:00.225807] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:37.591 [2024-04-24 21:36:00.225818] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:37.591 [2024-04-24 21:36:00.225830] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:20:37.591 [2024-04-24 21:36:00.225840] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:20:37.591 [2024-04-24 21:36:00.225849] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:20:37.591 [2024-04-24 21:36:00.225860] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:20:37.591 [2024-04-24 21:36:00.225869] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:20:37.591 [2024-04-24 21:36:00.225878] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:20:37.591 [2024-04-24 21:36:00.225938] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:37.591 [2024-04-24 21:36:00.225952] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:37.591 [2024-04-24 21:36:00.225963] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:37.591 [2024-04-24 21:36:00.225974] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:37.591 [2024-04-24 21:36:00.225985] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.591 [2024-04-24 21:36:00.225994] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.591 [2024-04-24 21:36:00.226001] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.591 [2024-04-24 21:36:00.226031] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20738e0 (9): Bad file descriptor
00:20:37.591 [2024-04-24 21:36:00.226043] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d5510 (9): Bad file descriptor
00:20:37.591 [2024-04-24 21:36:00.226053] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:20:37.591 [2024-04-24 21:36:00.226066] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:20:37.591 [2024-04-24 21:36:00.226076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:20:37.591 [2024-04-24 21:36:00.226105] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.591 [2024-04-24 21:36:00.226454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.591 [2024-04-24 21:36:00.226698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.591 [2024-04-24 21:36:00.226713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b16f0 with addr=10.0.0.2, port=4420
00:20:37.591 [2024-04-24 21:36:00.226723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b16f0 is same with the state(5) to be set
00:20:37.591 [2024-04-24 21:36:00.227173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.591 [2024-04-24 21:36:00.227525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.591 [2024-04-24 21:36:00.227540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b22a0 with addr=10.0.0.2, port=4420
00:20:37.591 [2024-04-24 21:36:00.227551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b22a0 is same with the state(5) to be set
00:20:37.591 [2024-04-24 21:36:00.227935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.591 [2024-04-24 21:36:00.228312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.591 [2024-04-24 21:36:00.228326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24953d0 with addr=10.0.0.2, port=4420
00:20:37.591 [2024-04-24 21:36:00.228336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24953d0 is same with the state(5) to be set
00:20:37.591 [2024-04-24 21:36:00.228746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.591 [2024-04-24 21:36:00.229182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:37.591 [2024-04-24 21:36:00.229199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x264ecc0 with addr=10.0.0.2, port=4420
00:20:37.591 [2024-04-24 21:36:00.229212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264ecc0 is same with the state(5) to be set
00:20:37.591 [2024-04-24 21:36:00.229224] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:37.591 [2024-04-24 21:36:00.229236] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:37.591 [2024-04-24 21:36:00.229249] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:37.591 [2024-04-24 21:36:00.229263] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:20:37.591 [2024-04-24 21:36:00.229276] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:20:37.591 [2024-04-24 21:36:00.229287] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:20:37.591 [2024-04-24 21:36:00.229325] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.591 [2024-04-24 21:36:00.229337] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.591 [2024-04-24 21:36:00.229351] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b16f0 (9): Bad file descriptor
00:20:37.591 [2024-04-24 21:36:00.229366] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b22a0 (9): Bad file descriptor
00:20:37.591 [2024-04-24 21:36:00.229381] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24953d0 (9): Bad file descriptor
00:20:37.591 [2024-04-24 21:36:00.229396] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x264ecc0 (9): Bad file descriptor
00:20:37.591 [2024-04-24 21:36:00.229459] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:37.591 [2024-04-24 21:36:00.229474] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:37.591 [2024-04-24 21:36:00.229486] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:37.591 [2024-04-24 21:36:00.229500] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:20:37.591 [2024-04-24 21:36:00.229512] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:20:37.591 [2024-04-24 21:36:00.229524] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:20:37.591 [2024-04-24 21:36:00.229538] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:20:37.591 [2024-04-24 21:36:00.229550] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:20:37.591 [2024-04-24 21:36:00.229562] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:20:37.591 [2024-04-24 21:36:00.229576] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:37.591 [2024-04-24 21:36:00.229587] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:37.591 [2024-04-24 21:36:00.229599] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:20:37.591 [2024-04-24 21:36:00.229633] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.591 [2024-04-24 21:36:00.229647] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.591 [2024-04-24 21:36:00.229657] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.591 [2024-04-24 21:36:00.229668] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:37.851 21:36:00 -- target/shutdown.sh@136 -- # nvmfpid=
00:20:37.851 21:36:00 -- target/shutdown.sh@139 -- # sleep 1
00:20:38.787 21:36:01 -- target/shutdown.sh@142 -- # kill -9 2912017
00:20:38.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2912017) - No such process
00:20:38.787 21:36:01 -- target/shutdown.sh@142 -- # true
00:20:38.787 21:36:01 -- target/shutdown.sh@144 -- # stoptarget
00:20:38.787 21:36:01 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:38.787 21:36:01 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:38.787 21:36:01 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:38.787 21:36:01 -- target/shutdown.sh@45 -- # nvmftestfini
00:20:38.787 21:36:01 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:38.787 21:36:01 -- nvmf/common.sh@117 -- # sync
00:20:38.787 21:36:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:38.787 21:36:01 -- nvmf/common.sh@120 -- # set +e
00:20:38.787 21:36:01 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:38.787 21:36:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:38.787 rmmod nvme_tcp
00:20:38.787 rmmod nvme_fabrics
00:20:38.787 rmmod nvme_keyring
00:20:38.787 21:36:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:38.787 21:36:01 -- nvmf/common.sh@124 -- # set -e
00:20:38.787 21:36:01 -- nvmf/common.sh@125 -- # return 0
00:20:38.787 21:36:01 -- nvmf/common.sh@478 -- # '[' -n '' ']'
00:20:38.787 21:36:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:38.787 21:36:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:38.787 21:36:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:39.047 21:36:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:39.047 21:36:01 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:39.047 21:36:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:39.047 21:36:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:39.047 21:36:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:40.957 21:36:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:40.957
00:20:40.957 real 0m8.128s
00:20:40.957 user 0m20.083s
00:20:40.957 sys 0m1.675s
00:20:40.958 21:36:03 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:40.958 21:36:03 -- common/autotest_common.sh@10 -- # set +x
00:20:40.958 ************************************
00:20:40.958 END TEST nvmf_shutdown_tc3
00:20:40.958 ************************************
00:20:40.958 21:36:03 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:20:40.958
00:20:40.958 real 0m33.528s
00:20:40.958 user 1m20.320s
00:20:40.958 sys 0m10.619s
00:20:40.958 21:36:03 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:40.958 21:36:03 -- common/autotest_common.sh@10 -- # set +x
00:20:40.958 ************************************
00:20:40.958 END TEST nvmf_shutdown
00:20:40.958 ************************************
00:20:41.216 21:36:03 -- nvmf/nvmf.sh@84 -- # timing_exit target
00:20:41.216 21:36:03 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:41.216 21:36:03 -- common/autotest_common.sh@10 -- # set +x
00:20:41.217 21:36:03 -- nvmf/nvmf.sh@86 -- # timing_enter host
00:20:41.217 21:36:03 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:41.217 21:36:03 -- common/autotest_common.sh@10 -- # set +x
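
The nvmftestfini/nvmfcleanup sequence traced above boils down to a small amount of shell: flush I/O, unload the kernel NVMe-oF modules with retries, remove the isolating network namespace, and clear the test address. A minimal sketch follows, assuming a hypothetical wrapper name (nvmf_teardown) rather than the real helpers in nvmf/common.sh:

# A minimal sketch of the teardown just traced; nvmf_teardown is an
# illustrative name, and the namespace/interface names come from the log.
nvmf_teardown() {
    sync                                 # flush outstanding writes first

    # Module references can linger briefly, so retry the unload, mirroring
    # the trace's 'for i in {1..20}' loop.
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics

    # Remove the namespace used to isolate the target, if one exists
    # (the trace compares cvl_0_0_ns_spdk against nvmf_tgt_ns here).
    ip netns list | grep -q cvl_0_0_ns_spdk && ip netns delete cvl_0_0_ns_spdk

    # Drop the test address from the remaining interface (cvl_0_1 in the log).
    ip -4 addr flush dev cvl_0_1 2>/dev/null || true
}
nvmf_teardown
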
00:20:41.217 21:36:03 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]]
00:20:41.217 21:36:03 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:20:41.217 21:36:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:20:41.217 21:36:03 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:41.217 21:36:03 -- common/autotest_common.sh@10 -- # set +x
00:20:41.217 ************************************
00:20:41.217 START TEST nvmf_multicontroller
00:20:41.217 ************************************
00:20:41.217 21:36:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:20:41.478 * Looking for test storage...
00:20:41.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:20:41.478 21:36:04 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:41.478 21:36:04 -- nvmf/common.sh@7 -- # uname -s
00:20:41.478 21:36:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:41.478 21:36:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:41.478 21:36:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:41.478 21:36:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:41.478 21:36:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:41.478 21:36:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:41.478 21:36:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:41.478 21:36:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:41.478 21:36:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:41.478 21:36:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:41.478 21:36:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:20:41.478 21:36:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:20:41.478 21:36:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:41.478 21:36:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:41.478 21:36:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:41.478 21:36:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:41.478 21:36:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:41.478 21:36:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:41.478 21:36:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:41.478 21:36:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:41.479 21:36:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:41.479 21:36:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... the same golangci/protoc/go prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:41.479 21:36:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... the same golangci/protoc/go prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:41.479 21:36:04 -- paths/export.sh@5 -- # export PATH
00:20:41.479 21:36:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... the same golangci/protoc/go prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:41.479 21:36:04 -- nvmf/common.sh@47 -- # : 0
00:20:41.479 21:36:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:20:41.479 21:36:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:20:41.479 21:36:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:41.479 21:36:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:41.479 21:36:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:41.479 21:36:04 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:20:41.479 21:36:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:20:41.479 21:36:04 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:20:41.479 21:36:04 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:20:41.479 21:36:04 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:20:41.479 21:36:04 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:20:41.479 21:36:04 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:20:41.479 21:36:04 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:20:41.479 21:36:04 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:20:41.479 21:36:04 -- host/multicontroller.sh@23 -- # nvmftestinit
00:20:41.479 21:36:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:20:41.479 21:36:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:41.479 21:36:04 -- nvmf/common.sh@437 -- # prepare_net_devs
00:20:41.479 21:36:04 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:20:41.479 21:36:04 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:20:41.479 21:36:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:41.479 21:36:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:41.479 21:36:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
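
Before the PCI scan that follows, the sourced scripts have established the test environment traced above. A sketch of those assignments, using only values visible in the trace; the NVME_HOSTID derivation from the NQN is an assumption, since nvmf/common.sh may compute it differently:

# Environment established by nvmf/common.sh and host/multicontroller.sh,
# reconstructed from the trace; values are the ones the log shows.
NVMF_PORT=4420                      # primary NVMe-oF listener
NVMF_SECOND_PORT=4421               # second listener, used for the second controller path
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_SERIAL=SPDKISFASTANDAWESOME
NET_TYPE=phy
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn

NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: the uuid after the last ':'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# Test-local settings from host/multicontroller.sh.
MALLOC_BDEV_SIZE=64                 # MiB
MALLOC_BLOCK_SIZE=512               # bytes
NVMF_HOST_FIRST_PORT=60000
NVMF_HOST_SECOND_PORT=60001
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
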
00:20:41.479 21:36:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:41.479 21:36:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:41.479 21:36:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:41.479 21:36:04 -- common/autotest_common.sh@10 -- # set +x 00:20:48.073 21:36:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:48.073 21:36:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:48.073 21:36:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:48.073 21:36:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:48.073 21:36:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:48.073 21:36:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:48.073 21:36:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:48.073 21:36:10 -- nvmf/common.sh@295 -- # net_devs=() 00:20:48.073 21:36:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:48.073 21:36:10 -- nvmf/common.sh@296 -- # e810=() 00:20:48.073 21:36:10 -- nvmf/common.sh@296 -- # local -ga e810 00:20:48.073 21:36:10 -- nvmf/common.sh@297 -- # x722=() 00:20:48.073 21:36:10 -- nvmf/common.sh@297 -- # local -ga x722 00:20:48.073 21:36:10 -- nvmf/common.sh@298 -- # mlx=() 00:20:48.073 21:36:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:48.073 21:36:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.073 21:36:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.073 21:36:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.073 21:36:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.073 21:36:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.073 21:36:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.073 21:36:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.073 21:36:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.073 21:36:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.073 21:36:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.073 21:36:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.073 21:36:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:48.073 21:36:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:48.073 21:36:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:48.073 21:36:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:48.073 21:36:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:48.073 21:36:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:48.073 21:36:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.073 21:36:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:48.073 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:48.073 21:36:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.073 21:36:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.073 21:36:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.073 21:36:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.073 21:36:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.073 21:36:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.073 21:36:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:48.073 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:48.073 21:36:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:20:48.073 21:36:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.073 21:36:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.073 21:36:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.073 21:36:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.073 21:36:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:48.073 21:36:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:48.073 21:36:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:48.073 21:36:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.073 21:36:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.073 21:36:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:48.073 21:36:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.073 21:36:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:48.073 Found net devices under 0000:af:00.0: cvl_0_0 00:20:48.073 21:36:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.073 21:36:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.073 21:36:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.073 21:36:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:48.074 21:36:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.074 21:36:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:48.074 Found net devices under 0000:af:00.1: cvl_0_1 00:20:48.074 21:36:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.074 21:36:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:48.074 21:36:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:48.074 21:36:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:48.074 21:36:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:48.074 21:36:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:48.074 21:36:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.074 21:36:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.074 21:36:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.074 21:36:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:48.074 21:36:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:48.074 21:36:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:48.074 21:36:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:48.074 21:36:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:48.074 21:36:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.074 21:36:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:48.074 21:36:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:48.074 21:36:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:48.074 21:36:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:48.074 21:36:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:48.074 21:36:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:48.074 21:36:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:48.074 21:36:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:48.334 21:36:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:48.334 21:36:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:20:48.334 21:36:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:48.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:20:48.334 00:20:48.334 --- 10.0.0.2 ping statistics --- 00:20:48.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.334 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:20:48.334 21:36:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:48.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:48.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:20:48.334 00:20:48.334 --- 10.0.0.1 ping statistics --- 00:20:48.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.334 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:20:48.334 21:36:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.334 21:36:11 -- nvmf/common.sh@411 -- # return 0 00:20:48.334 21:36:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:48.334 21:36:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.334 21:36:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:48.334 21:36:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:48.334 21:36:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.334 21:36:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:48.334 21:36:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:48.334 21:36:11 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:48.334 21:36:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:48.334 21:36:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:48.334 21:36:11 -- common/autotest_common.sh@10 -- # set +x 00:20:48.334 21:36:11 -- nvmf/common.sh@470 -- # nvmfpid=2916890 00:20:48.334 21:36:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:48.334 21:36:11 -- nvmf/common.sh@471 -- # waitforlisten 2916890 00:20:48.334 21:36:11 -- common/autotest_common.sh@817 -- # '[' -z 2916890 ']' 00:20:48.334 21:36:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.334 21:36:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:48.334 21:36:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.334 21:36:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:48.334 21:36:11 -- common/autotest_common.sh@10 -- # set +x 00:20:48.334 [2024-04-24 21:36:11.079108] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:20:48.334 [2024-04-24 21:36:11.079155] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.334 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.334 [2024-04-24 21:36:11.152609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:48.334 [2024-04-24 21:36:11.221433] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:48.334 [2024-04-24 21:36:11.221476] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.334 [2024-04-24 21:36:11.221486] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.335 [2024-04-24 21:36:11.221495] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.335 [2024-04-24 21:36:11.221502] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.594 [2024-04-24 21:36:11.221607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.594 [2024-04-24 21:36:11.221703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.594 [2024-04-24 21:36:11.221705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.163 21:36:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:49.163 21:36:11 -- common/autotest_common.sh@850 -- # return 0 00:20:49.163 21:36:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:49.163 21:36:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:49.163 21:36:11 -- common/autotest_common.sh@10 -- # set +x 00:20:49.163 21:36:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.163 21:36:11 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:49.163 21:36:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.163 21:36:11 -- common/autotest_common.sh@10 -- # set +x 00:20:49.163 [2024-04-24 21:36:11.948968] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.163 21:36:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.163 21:36:11 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:49.163 21:36:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.163 21:36:11 -- common/autotest_common.sh@10 -- # set +x 00:20:49.163 Malloc0 00:20:49.163 21:36:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.163 21:36:11 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:49.163 21:36:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.163 21:36:11 -- common/autotest_common.sh@10 -- # set +x 00:20:49.163 21:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.164 21:36:12 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:49.164 21:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.164 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:49.164 21:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.164 21:36:12 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:49.164 21:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.164 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:49.164 [2024-04-24 21:36:12.016008] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.164 21:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.164 21:36:12 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:49.164 21:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.164 21:36:12 
-- common/autotest_common.sh@10 -- # set +x 00:20:49.164 [2024-04-24 21:36:12.023916] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:49.164 21:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.164 21:36:12 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:49.164 21:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.164 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:49.164 Malloc1 00:20:49.164 21:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.164 21:36:12 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:49.164 21:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.164 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:49.424 21:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.424 21:36:12 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:49.424 21:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.424 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:49.424 21:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.424 21:36:12 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:49.424 21:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.424 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:49.424 21:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.424 21:36:12 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:49.424 21:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.424 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:49.424 21:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.424 21:36:12 -- host/multicontroller.sh@44 -- # bdevperf_pid=2917171 00:20:49.424 21:36:12 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:49.424 21:36:12 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:49.424 21:36:12 -- host/multicontroller.sh@47 -- # waitforlisten 2917171 /var/tmp/bdevperf.sock 00:20:49.424 21:36:12 -- common/autotest_common.sh@817 -- # '[' -z 2917171 ']' 00:20:49.424 21:36:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.424 21:36:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:49.424 21:36:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
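The rpc_cmd traces above amount to the following target provisioning plus an idle initiator. A sketch using scripts/rpc.py against the default /var/tmp/spdk.sock (which the netns-contained target still serves, since UNIX sockets live on the filesystem; cnode2 is elided for brevity):
$ scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$ scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# ... cnode2 is provisioned the same way, backed by Malloc1 ...
$ build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
# -z keeps bdevperf idle until perform_tests arrives on its own RPC socket (-r)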
00:20:49.424 21:36:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:49.424 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:50.366 21:36:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:50.366 21:36:12 -- common/autotest_common.sh@850 -- # return 0 00:20:50.366 21:36:12 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:50.366 21:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.366 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:20:50.366 NVMe0n1 00:20:50.366 21:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.366 21:36:13 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:50.366 21:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.366 21:36:13 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:50.366 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:50.366 21:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.366 1 00:20:50.366 21:36:13 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:50.366 21:36:13 -- common/autotest_common.sh@638 -- # local es=0 00:20:50.366 21:36:13 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:50.366 21:36:13 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:50.366 21:36:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:50.366 21:36:13 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:50.366 21:36:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:50.366 21:36:13 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:50.366 21:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.366 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:50.366 request: 00:20:50.366 { 00:20:50.366 "name": "NVMe0", 00:20:50.366 "trtype": "tcp", 00:20:50.366 "traddr": "10.0.0.2", 00:20:50.366 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:50.366 "hostaddr": "10.0.0.2", 00:20:50.366 "hostsvcid": "60000", 00:20:50.366 "adrfam": "ipv4", 00:20:50.366 "trsvcid": "4420", 00:20:50.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.366 "method": "bdev_nvme_attach_controller", 00:20:50.366 "req_id": 1 00:20:50.366 } 00:20:50.366 Got JSON-RPC error response 00:20:50.366 response: 00:20:50.366 { 00:20:50.366 "code": -114, 00:20:50.366 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:50.366 } 00:20:50.366 21:36:13 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:50.366 21:36:13 -- common/autotest_common.sh@641 -- # es=1 00:20:50.366 21:36:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:50.366 21:36:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:50.366 21:36:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:50.366 21:36:13 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:50.366 21:36:13 -- common/autotest_common.sh@638 -- # local es=0 00:20:50.366 21:36:13 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:50.366 21:36:13 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:50.366 21:36:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:50.366 21:36:13 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:50.366 21:36:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:50.366 21:36:13 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:50.366 21:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.366 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:50.366 request: 00:20:50.366 { 00:20:50.366 "name": "NVMe0", 00:20:50.366 "trtype": "tcp", 00:20:50.366 "traddr": "10.0.0.2", 00:20:50.366 "hostaddr": "10.0.0.2", 00:20:50.366 "hostsvcid": "60000", 00:20:50.366 "adrfam": "ipv4", 00:20:50.366 "trsvcid": "4420", 00:20:50.366 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:50.366 "method": "bdev_nvme_attach_controller", 00:20:50.366 "req_id": 1 00:20:50.366 } 00:20:50.366 Got JSON-RPC error response 00:20:50.366 response: 00:20:50.366 { 00:20:50.366 "code": -114, 00:20:50.366 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:50.366 } 00:20:50.366 21:36:13 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:50.366 21:36:13 -- common/autotest_common.sh@641 -- # es=1 00:20:50.366 21:36:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:50.366 21:36:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:50.366 21:36:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:50.366 21:36:13 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:50.366 21:36:13 -- common/autotest_common.sh@638 -- # local es=0 00:20:50.366 21:36:13 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:50.366 21:36:13 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:50.366 21:36:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:50.366 21:36:13 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:50.366 21:36:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:50.366 21:36:13 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:50.366 21:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.366 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:50.366 request: 00:20:50.366 { 00:20:50.366 "name": "NVMe0", 00:20:50.366 "trtype": "tcp", 00:20:50.366 "traddr": "10.0.0.2", 00:20:50.366 "hostaddr": 
"10.0.0.2", 00:20:50.366 "hostsvcid": "60000", 00:20:50.366 "adrfam": "ipv4", 00:20:50.366 "trsvcid": "4420", 00:20:50.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.366 "multipath": "disable", 00:20:50.366 "method": "bdev_nvme_attach_controller", 00:20:50.366 "req_id": 1 00:20:50.367 } 00:20:50.367 Got JSON-RPC error response 00:20:50.367 response: 00:20:50.367 { 00:20:50.367 "code": -114, 00:20:50.367 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:50.367 } 00:20:50.367 21:36:13 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:50.367 21:36:13 -- common/autotest_common.sh@641 -- # es=1 00:20:50.367 21:36:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:50.367 21:36:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:50.367 21:36:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:50.367 21:36:13 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:50.367 21:36:13 -- common/autotest_common.sh@638 -- # local es=0 00:20:50.367 21:36:13 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:50.367 21:36:13 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:50.367 21:36:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:50.367 21:36:13 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:50.367 21:36:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:50.367 21:36:13 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:50.367 21:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.367 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:50.367 request: 00:20:50.367 { 00:20:50.367 "name": "NVMe0", 00:20:50.367 "trtype": "tcp", 00:20:50.367 "traddr": "10.0.0.2", 00:20:50.367 "hostaddr": "10.0.0.2", 00:20:50.367 "hostsvcid": "60000", 00:20:50.367 "adrfam": "ipv4", 00:20:50.367 "trsvcid": "4420", 00:20:50.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.367 "multipath": "failover", 00:20:50.367 "method": "bdev_nvme_attach_controller", 00:20:50.367 "req_id": 1 00:20:50.367 } 00:20:50.367 Got JSON-RPC error response 00:20:50.367 response: 00:20:50.367 { 00:20:50.367 "code": -114, 00:20:50.367 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:50.367 } 00:20:50.367 21:36:13 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:50.367 21:36:13 -- common/autotest_common.sh@641 -- # es=1 00:20:50.367 21:36:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:50.367 21:36:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:50.367 21:36:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:50.367 21:36:13 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:50.367 21:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.367 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:50.627 00:20:50.627 21:36:13 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:20:50.627 21:36:13 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:50.627 21:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.627 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:50.627 21:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.627 21:36:13 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:50.627 21:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.627 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:50.886 00:20:50.886 21:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.886 21:36:13 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:50.886 21:36:13 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:50.886 21:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.886 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:50.886 21:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.886 21:36:13 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:50.886 21:36:13 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:51.823 0 00:20:52.083 21:36:14 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:52.083 21:36:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.083 21:36:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.083 21:36:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.083 21:36:14 -- host/multicontroller.sh@100 -- # killprocess 2917171 00:20:52.083 21:36:14 -- common/autotest_common.sh@936 -- # '[' -z 2917171 ']' 00:20:52.083 21:36:14 -- common/autotest_common.sh@940 -- # kill -0 2917171 00:20:52.083 21:36:14 -- common/autotest_common.sh@941 -- # uname 00:20:52.083 21:36:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:52.083 21:36:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2917171 00:20:52.083 21:36:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:52.083 21:36:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:52.083 21:36:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2917171' 00:20:52.083 killing process with pid 2917171 00:20:52.083 21:36:14 -- common/autotest_common.sh@955 -- # kill 2917171 00:20:52.083 21:36:14 -- common/autotest_common.sh@960 -- # wait 2917171 00:20:52.343 21:36:14 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:52.343 21:36:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.343 21:36:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.343 21:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.343 21:36:15 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:52.343 21:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.343 21:36:15 -- common/autotest_common.sh@10 -- # set +x 00:20:52.343 21:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.343 21:36:15 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
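Read together, the rejected attaches and the accepted ones above show the bdev_nvme_attach_controller rules this test exercises: reusing the name NVMe0 with a different hostnqn, with a different subnqn, with -x disable, or with -x failover toward the already-registered traddr/trsvcid all return -114, while the same name with a new trsvcid (4421) adds a second path. The accepted sequence, sketched against the bdevperf socket:
$ scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # add path 2 to NVMe0
$ scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # drop only that path
$ scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
$ scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe   # expect 2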
00:20:52.343 21:36:15 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:52.343 21:36:15 -- common/autotest_common.sh@1598 -- # read -r file 00:20:52.343 21:36:15 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:52.343 21:36:15 -- common/autotest_common.sh@1597 -- # sort -u 00:20:52.343 21:36:15 -- common/autotest_common.sh@1599 -- # cat 00:20:52.343 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:52.343 [2024-04-24 21:36:12.129639] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:20:52.343 [2024-04-24 21:36:12.129688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917171 ] 00:20:52.343 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.343 [2024-04-24 21:36:12.199046] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.343 [2024-04-24 21:36:12.267734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.343 [2024-04-24 21:36:13.587260] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name ea47f428-9979-42e1-ba76-6eebcc6a9960 already exists 00:20:52.343 [2024-04-24 21:36:13.587292] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:ea47f428-9979-42e1-ba76-6eebcc6a9960 alias for bdev NVMe1n1 00:20:52.343 [2024-04-24 21:36:13.587305] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:52.343 Running I/O for 1 seconds... 00:20:52.343 00:20:52.343 Latency(us) 00:20:52.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.343 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:52.343 NVMe0n1 : 1.01 23578.90 92.11 0.00 0.00 5411.80 3329.23 27053.26 00:20:52.344 =================================================================================================================== 00:20:52.344 Total : 23578.90 92.11 0.00 0.00 5411.80 3329.23 27053.26 00:20:52.344 Received shutdown signal, test time was about 1.000000 seconds 00:20:52.344 00:20:52.344 Latency(us) 00:20:52.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.344 =================================================================================================================== 00:20:52.344 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:52.344 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:52.344 21:36:15 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:52.344 21:36:15 -- common/autotest_common.sh@1598 -- # read -r file 00:20:52.344 21:36:15 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:52.344 21:36:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:52.344 21:36:15 -- nvmf/common.sh@117 -- # sync 00:20:52.344 21:36:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:52.344 21:36:15 -- nvmf/common.sh@120 -- # set +e 00:20:52.344 21:36:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:52.344 21:36:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:52.344 rmmod nvme_tcp 00:20:52.344 rmmod nvme_fabrics 00:20:52.344 rmmod nvme_keyring 00:20:52.344 21:36:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:52.344 21:36:15 -- nvmf/common.sh@124 -- # set -e 
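The bdev.c/bdev_nvme.c errors preserved in try.txt above are the expected fallout of the NVMe1 attach: both controllers expose the same namespace, so registering its uuid a second time (as an NVMe1n1 alias) is refused while the controller itself stays attached, which is why bdev_nvme_get_controllers counted 2 earlier. One way to observe that asymmetry while bdevperf is still up, sketched:
$ scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers   # lists NVMe0 and NVMe1
$ scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs              # only NVMe0n1 was registered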
00:20:52.344 21:36:15 -- nvmf/common.sh@125 -- # return 0 00:20:52.344 21:36:15 -- nvmf/common.sh@478 -- # '[' -n 2916890 ']' 00:20:52.344 21:36:15 -- nvmf/common.sh@479 -- # killprocess 2916890 00:20:52.344 21:36:15 -- common/autotest_common.sh@936 -- # '[' -z 2916890 ']' 00:20:52.344 21:36:15 -- common/autotest_common.sh@940 -- # kill -0 2916890 00:20:52.344 21:36:15 -- common/autotest_common.sh@941 -- # uname 00:20:52.344 21:36:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:52.344 21:36:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2916890 00:20:52.344 21:36:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:52.344 21:36:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:52.344 21:36:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2916890' 00:20:52.344 killing process with pid 2916890 00:20:52.344 21:36:15 -- common/autotest_common.sh@955 -- # kill 2916890 00:20:52.344 21:36:15 -- common/autotest_common.sh@960 -- # wait 2916890 00:20:52.603 21:36:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:52.603 21:36:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:52.603 21:36:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:52.603 21:36:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.603 21:36:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:52.603 21:36:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.603 21:36:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.603 21:36:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.143 21:36:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:55.143 00:20:55.143 real 0m13.422s 00:20:55.143 user 0m17.367s 00:20:55.143 sys 0m6.186s 00:20:55.143 21:36:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:55.143 21:36:17 -- common/autotest_common.sh@10 -- # set +x 00:20:55.143 ************************************ 00:20:55.143 END TEST nvmf_multicontroller 00:20:55.143 ************************************ 00:20:55.143 21:36:17 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:55.143 21:36:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:55.143 21:36:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:55.143 21:36:17 -- common/autotest_common.sh@10 -- # set +x 00:20:55.143 ************************************ 00:20:55.143 START TEST nvmf_aer 00:20:55.143 ************************************ 00:20:55.143 21:36:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:55.143 * Looking for test storage... 
00:20:55.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:55.143 21:36:17 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:55.143 21:36:17 -- nvmf/common.sh@7 -- # uname -s 00:20:55.143 21:36:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.143 21:36:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.143 21:36:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.143 21:36:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.143 21:36:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.143 21:36:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.143 21:36:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.143 21:36:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.143 21:36:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.143 21:36:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.143 21:36:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:55.143 21:36:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:55.143 21:36:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.143 21:36:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.143 21:36:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:55.143 21:36:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.143 21:36:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:55.143 21:36:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.143 21:36:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.143 21:36:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.143 21:36:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.143 21:36:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.143 21:36:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.143 21:36:17 -- paths/export.sh@5 -- # export PATH 00:20:55.143 21:36:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.143 21:36:17 -- nvmf/common.sh@47 -- # : 0 00:20:55.143 21:36:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:55.143 21:36:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:55.143 21:36:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.143 21:36:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.143 21:36:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.143 21:36:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:55.143 21:36:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:55.143 21:36:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:55.143 21:36:17 -- host/aer.sh@11 -- # nvmftestinit 00:20:55.143 21:36:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:55.143 21:36:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.143 21:36:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:55.143 21:36:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:55.143 21:36:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:55.143 21:36:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.143 21:36:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.143 21:36:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.143 21:36:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:55.143 21:36:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:55.143 21:36:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:55.143 21:36:17 -- common/autotest_common.sh@10 -- # set +x 00:21:01.733 21:36:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:01.733 21:36:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:01.733 21:36:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:01.733 21:36:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:01.733 21:36:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:01.733 21:36:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:01.733 21:36:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:01.733 21:36:24 -- nvmf/common.sh@295 -- # net_devs=() 00:21:01.733 21:36:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:01.733 21:36:24 -- nvmf/common.sh@296 -- # e810=() 00:21:01.733 21:36:24 -- nvmf/common.sh@296 -- # local -ga e810 00:21:01.733 21:36:24 -- nvmf/common.sh@297 -- # x722=() 00:21:01.733 
21:36:24 -- nvmf/common.sh@297 -- # local -ga x722 00:21:01.733 21:36:24 -- nvmf/common.sh@298 -- # mlx=() 00:21:01.733 21:36:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:01.733 21:36:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.733 21:36:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.733 21:36:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.733 21:36:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.733 21:36:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.733 21:36:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.733 21:36:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.733 21:36:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.733 21:36:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.733 21:36:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.733 21:36:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.733 21:36:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:01.733 21:36:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:01.733 21:36:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:01.733 21:36:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.733 21:36:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:01.733 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:01.733 21:36:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.733 21:36:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:01.733 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:01.733 21:36:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:01.733 21:36:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.733 21:36:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.733 21:36:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:01.733 21:36:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.733 21:36:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:01.733 Found net devices under 0000:af:00.0: cvl_0_0 00:21:01.733 21:36:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.733 21:36:24 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.733 21:36:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.733 21:36:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:01.733 21:36:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.733 21:36:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:01.733 Found net devices under 0000:af:00.1: cvl_0_1 00:21:01.733 21:36:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.733 21:36:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:01.733 21:36:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:01.733 21:36:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:01.733 21:36:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:01.733 21:36:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.733 21:36:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.733 21:36:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.733 21:36:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:01.733 21:36:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.733 21:36:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.733 21:36:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:01.733 21:36:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.734 21:36:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.734 21:36:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:01.734 21:36:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:01.734 21:36:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.734 21:36:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.734 21:36:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.734 21:36:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.734 21:36:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:01.734 21:36:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.734 21:36:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.734 21:36:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.734 21:36:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:01.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:21:01.734 00:21:01.734 --- 10.0.0.2 ping statistics --- 00:21:01.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.734 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:21:01.734 21:36:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:01.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:21:01.734 00:21:01.734 --- 10.0.0.1 ping statistics --- 00:21:01.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.734 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:21:01.734 21:36:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.734 21:36:24 -- nvmf/common.sh@411 -- # return 0 00:21:01.734 21:36:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:01.734 21:36:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.734 21:36:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:01.734 21:36:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:01.734 21:36:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.734 21:36:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:01.734 21:36:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:01.734 21:36:24 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:01.734 21:36:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:01.734 21:36:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:01.734 21:36:24 -- common/autotest_common.sh@10 -- # set +x 00:21:01.734 21:36:24 -- nvmf/common.sh@470 -- # nvmfpid=2921417 00:21:01.734 21:36:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:01.734 21:36:24 -- nvmf/common.sh@471 -- # waitforlisten 2921417 00:21:01.734 21:36:24 -- common/autotest_common.sh@817 -- # '[' -z 2921417 ']' 00:21:01.734 21:36:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.734 21:36:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:01.734 21:36:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.734 21:36:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:01.734 21:36:24 -- common/autotest_common.sh@10 -- # set +x 00:21:01.734 [2024-04-24 21:36:24.485429] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:21:01.734 [2024-04-24 21:36:24.485483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.734 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.734 [2024-04-24 21:36:24.559395] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:01.993 [2024-04-24 21:36:24.634062] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.993 [2024-04-24 21:36:24.634097] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.993 [2024-04-24 21:36:24.634108] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.993 [2024-04-24 21:36:24.634117] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.993 [2024-04-24 21:36:24.634124] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
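As the app_setup_trace notices suggest, the tracepoints enabled here by -e 0xFFFF can be snapshotted while the target runs, or the shm ring buffer can be kept for later; a sketch taken from the hints in the notice itself:
$ build/bin/spdk_trace -s nvmf -i 0   # snapshot events from the running target (shm id 0)
$ cp /dev/shm/nvmf_trace.0 /tmp/      # or keep the ring buffer for offline analysis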
00:21:01.993 [2024-04-24 21:36:24.634164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.993 [2024-04-24 21:36:24.634262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.993 [2024-04-24 21:36:24.634323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:01.993 [2024-04-24 21:36:24.634325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.562 21:36:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:02.562 21:36:25 -- common/autotest_common.sh@850 -- # return 0 00:21:02.562 21:36:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:02.562 21:36:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:02.562 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:21:02.562 21:36:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.562 21:36:25 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:02.562 21:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.562 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:21:02.562 [2024-04-24 21:36:25.343175] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.562 21:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.562 21:36:25 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:02.562 21:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.562 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:21:02.562 Malloc0 00:21:02.562 21:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.562 21:36:25 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:02.562 21:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.562 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:21:02.562 21:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.562 21:36:25 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:02.562 21:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.562 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:21:02.562 21:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.562 21:36:25 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.562 21:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.562 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:21:02.562 [2024-04-24 21:36:25.401694] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.562 21:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.562 21:36:25 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:02.562 21:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.562 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:21:02.562 [2024-04-24 21:36:25.409471] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:02.562 [ 00:21:02.562 { 00:21:02.562 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:02.562 "subtype": "Discovery", 00:21:02.562 "listen_addresses": [], 00:21:02.562 "allow_any_host": true, 00:21:02.562 "hosts": [] 00:21:02.562 }, 00:21:02.562 { 00:21:02.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:21:02.562 "subtype": "NVMe", 00:21:02.562 "listen_addresses": [ 00:21:02.562 { 00:21:02.562 "transport": "TCP", 00:21:02.562 "trtype": "TCP", 00:21:02.562 "adrfam": "IPv4", 00:21:02.562 "traddr": "10.0.0.2", 00:21:02.562 "trsvcid": "4420" 00:21:02.562 } 00:21:02.562 ], 00:21:02.562 "allow_any_host": true, 00:21:02.562 "hosts": [], 00:21:02.562 "serial_number": "SPDK00000000000001", 00:21:02.562 "model_number": "SPDK bdev Controller", 00:21:02.562 "max_namespaces": 2, 00:21:02.562 "min_cntlid": 1, 00:21:02.562 "max_cntlid": 65519, 00:21:02.562 "namespaces": [ 00:21:02.562 { 00:21:02.562 "nsid": 1, 00:21:02.562 "bdev_name": "Malloc0", 00:21:02.562 "name": "Malloc0", 00:21:02.562 "nguid": "DA010E9BEDCE49369843592150EB9998", 00:21:02.562 "uuid": "da010e9b-edce-4936-9843-592150eb9998" 00:21:02.562 } 00:21:02.562 ] 00:21:02.562 } 00:21:02.562 ] 00:21:02.562 21:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.562 21:36:25 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:02.562 21:36:25 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:02.562 21:36:25 -- host/aer.sh@33 -- # aerpid=2921486 00:21:02.562 21:36:25 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:02.562 21:36:25 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:02.562 21:36:25 -- common/autotest_common.sh@1251 -- # local i=0 00:21:02.562 21:36:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:02.562 21:36:25 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:21:02.562 21:36:25 -- common/autotest_common.sh@1254 -- # i=1 00:21:02.562 21:36:25 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:02.822 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.822 21:36:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:02.822 21:36:25 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:21:02.822 21:36:25 -- common/autotest_common.sh@1254 -- # i=2 00:21:02.822 21:36:25 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:02.822 21:36:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:02.822 21:36:25 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:21:02.822 21:36:25 -- common/autotest_common.sh@1254 -- # i=3 00:21:02.822 21:36:25 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:03.081 21:36:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:03.081 21:36:25 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:03.081 21:36:25 -- common/autotest_common.sh@1262 -- # return 0 00:21:03.081 21:36:25 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:03.081 21:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:03.081 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:21:03.081 Malloc1 00:21:03.081 21:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:03.081 21:36:25 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:03.081 21:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:03.081 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:21:03.081 21:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:03.081 21:36:25 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:03.081 21:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:03.081 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:21:03.081 [ 00:21:03.081 { 00:21:03.081 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:03.081 "subtype": "Discovery", 00:21:03.081 "listen_addresses": [], 00:21:03.081 "allow_any_host": true, 00:21:03.081 "hosts": [] 00:21:03.081 }, 00:21:03.081 { 00:21:03.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.081 "subtype": "NVMe", 00:21:03.081 "listen_addresses": [ 00:21:03.081 { 00:21:03.081 "transport": "TCP", 00:21:03.081 "trtype": "TCP", 00:21:03.081 "adrfam": "IPv4", 00:21:03.081 "traddr": "10.0.0.2", 00:21:03.081 "trsvcid": "4420" 00:21:03.081 } 00:21:03.081 ], 00:21:03.081 "allow_any_host": true, 00:21:03.081 "hosts": [], 00:21:03.081 "serial_number": "SPDK00000000000001", 00:21:03.081 Asynchronous Event Request test 00:21:03.081 Attaching to 10.0.0.2 00:21:03.081 Attached to 10.0.0.2 00:21:03.081 Registering asynchronous event callbacks... 00:21:03.081 Starting namespace attribute notice tests for all controllers... 00:21:03.081 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:03.081 aer_cb - Changed Namespace 00:21:03.081 Cleaning up... 
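[Annotation] The aer run above exercises the Asynchronous Event Request path end to end: the target is provisioned with one namespace, the aer tool connects and arms AER callbacks (then touches /tmp/aer_touch_file, which the waitforfile loop above polls for), and hot-adding a second namespace triggers the namespace-attribute-changed notice seen in the tool output ("aer_cb - Changed Namespace"). Reduced to the essential commands from the trace, with rpc_cmd rendered as scripts/rpc.py against /var/tmp/spdk.sock and flags copied verbatim:

# Provision: malloc bdev -> subsystem capped at 2 namespaces -> ns 1 -> TCP listener.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 --name Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Start the AER tool; -n 2 is the expected namespace count after the hot add,
# -t names the touch file it creates once its callbacks are registered.
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &

# Hot-add namespace 2; the tool should log "aer_cb - Changed Namespace".
rpc.py bdev_malloc_create 64 4096 --name Malloc1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2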
00:21:03.081 "model_number": "SPDK bdev Controller", 00:21:03.081 "max_namespaces": 2, 00:21:03.081 "min_cntlid": 1, 00:21:03.081 "max_cntlid": 65519, 00:21:03.081 "namespaces": [ 00:21:03.081 { 00:21:03.081 "nsid": 1, 00:21:03.081 "bdev_name": "Malloc0", 00:21:03.081 "name": "Malloc0", 00:21:03.081 "nguid": "DA010E9BEDCE49369843592150EB9998", 00:21:03.081 "uuid": "da010e9b-edce-4936-9843-592150eb9998" 00:21:03.081 }, 00:21:03.081 { 00:21:03.081 "nsid": 2, 00:21:03.081 "bdev_name": "Malloc1", 00:21:03.081 "name": "Malloc1", 00:21:03.081 "nguid": "4B80C0F4A355433B916B9DB1F3DD1375", 00:21:03.081 "uuid": "4b80c0f4-a355-433b-916b-9db1f3dd1375" 00:21:03.081 } 00:21:03.081 ] 00:21:03.081 } 00:21:03.081 ] 00:21:03.081 21:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:03.081 21:36:25 -- host/aer.sh@43 -- # wait 2921486 00:21:03.081 21:36:25 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:03.081 21:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:03.081 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:21:03.081 21:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:03.081 21:36:25 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:03.081 21:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:03.081 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:21:03.081 21:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:03.081 21:36:25 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:03.081 21:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:03.081 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:21:03.081 21:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:03.081 21:36:25 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:03.081 21:36:25 -- host/aer.sh@51 -- # nvmftestfini 00:21:03.081 21:36:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:03.081 21:36:25 -- nvmf/common.sh@117 -- # sync 00:21:03.081 21:36:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:03.081 21:36:25 -- nvmf/common.sh@120 -- # set +e 00:21:03.081 21:36:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:03.081 21:36:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:03.081 rmmod nvme_tcp 00:21:03.081 rmmod nvme_fabrics 00:21:03.081 rmmod nvme_keyring 00:21:03.081 21:36:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:03.081 21:36:25 -- nvmf/common.sh@124 -- # set -e 00:21:03.081 21:36:25 -- nvmf/common.sh@125 -- # return 0 00:21:03.081 21:36:25 -- nvmf/common.sh@478 -- # '[' -n 2921417 ']' 00:21:03.081 21:36:25 -- nvmf/common.sh@479 -- # killprocess 2921417 00:21:03.081 21:36:25 -- common/autotest_common.sh@936 -- # '[' -z 2921417 ']' 00:21:03.081 21:36:25 -- common/autotest_common.sh@940 -- # kill -0 2921417 00:21:03.081 21:36:25 -- common/autotest_common.sh@941 -- # uname 00:21:03.081 21:36:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:03.081 21:36:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2921417 00:21:03.347 21:36:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:03.347 21:36:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:03.347 21:36:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2921417' 00:21:03.347 killing process with pid 2921417 00:21:03.347 21:36:26 -- common/autotest_common.sh@955 -- # kill 2921417 00:21:03.347 [2024-04-24 21:36:26.009154] app.c: 937:log_deprecation_hits: *WARNING*: 
rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:03.347 21:36:26 -- common/autotest_common.sh@960 -- # wait 2921417 00:21:03.347 21:36:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:03.347 21:36:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:03.347 21:36:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:03.347 21:36:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.347 21:36:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:03.347 21:36:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.347 21:36:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.347 21:36:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.889 21:36:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:05.889 00:21:05.889 real 0m10.594s 00:21:05.889 user 0m8.056s 00:21:05.889 sys 0m5.561s 00:21:05.889 21:36:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:05.889 21:36:28 -- common/autotest_common.sh@10 -- # set +x 00:21:05.889 ************************************ 00:21:05.889 END TEST nvmf_aer 00:21:05.889 ************************************ 00:21:05.889 21:36:28 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:05.889 21:36:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:05.889 21:36:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:05.889 21:36:28 -- common/autotest_common.sh@10 -- # set +x 00:21:05.889 ************************************ 00:21:05.889 START TEST nvmf_async_init 00:21:05.889 ************************************ 00:21:05.889 21:36:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:05.889 * Looking for test storage... 
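[Annotation] The teardown that closes nvmf_aer above is the harness's standard nvmftestfini sequence, condensed here from the trace (nvmfpid is the target started earlier; the namespace removal step is an approximation of the _remove_spdk_ns helper):

# Unload the kernel initiator modules; set +e above tolerates modules
# that are already gone (hence the rmmod chatter in the log).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Confirm the PID still belongs to the SPDK reactor before killing it.
ps --no-headers -o comm= "$nvmfpid"    # expect: reactor_0
kill "$nvmfpid"

# Tear down the test namespace (approximated) and flush the initiator address.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null
ip -4 addr flush cvl_0_1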
00:21:05.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:05.889 21:36:28 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.889 21:36:28 -- nvmf/common.sh@7 -- # uname -s 00:21:05.889 21:36:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.889 21:36:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.889 21:36:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.889 21:36:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.889 21:36:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.889 21:36:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.889 21:36:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.889 21:36:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.889 21:36:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.889 21:36:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.889 21:36:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:05.889 21:36:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:05.889 21:36:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.889 21:36:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.889 21:36:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.889 21:36:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.889 21:36:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.889 21:36:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.889 21:36:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.889 21:36:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.889 21:36:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.889 21:36:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.889 21:36:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.889 21:36:28 -- paths/export.sh@5 -- # export PATH 00:21:05.889 21:36:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.889 21:36:28 -- nvmf/common.sh@47 -- # : 0 00:21:05.889 21:36:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:05.889 21:36:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:05.889 21:36:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.889 21:36:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.889 21:36:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.889 21:36:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:05.889 21:36:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:05.889 21:36:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:05.889 21:36:28 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:05.889 21:36:28 -- host/async_init.sh@14 -- # null_block_size=512 00:21:05.889 21:36:28 -- host/async_init.sh@15 -- # null_bdev=null0 00:21:05.889 21:36:28 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:05.889 21:36:28 -- host/async_init.sh@20 -- # uuidgen 00:21:05.889 21:36:28 -- host/async_init.sh@20 -- # tr -d - 00:21:05.889 21:36:28 -- host/async_init.sh@20 -- # nguid=7365f653726f44c684e496ddabf21b1f 00:21:05.889 21:36:28 -- host/async_init.sh@22 -- # nvmftestinit 00:21:05.889 21:36:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:05.889 21:36:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.889 21:36:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:05.889 21:36:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:05.889 21:36:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:05.889 21:36:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.889 21:36:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.889 21:36:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.889 21:36:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:05.889 21:36:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:05.889 21:36:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:05.889 21:36:28 -- common/autotest_common.sh@10 -- # set +x 00:21:12.461 21:36:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:12.461 21:36:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:12.461 21:36:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:12.461 21:36:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:12.461 21:36:34 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:12.461 21:36:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:12.461 21:36:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:12.461 21:36:34 -- nvmf/common.sh@295 -- # net_devs=() 00:21:12.461 21:36:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:12.461 21:36:34 -- nvmf/common.sh@296 -- # e810=() 00:21:12.461 21:36:34 -- nvmf/common.sh@296 -- # local -ga e810 00:21:12.461 21:36:34 -- nvmf/common.sh@297 -- # x722=() 00:21:12.461 21:36:34 -- nvmf/common.sh@297 -- # local -ga x722 00:21:12.461 21:36:34 -- nvmf/common.sh@298 -- # mlx=() 00:21:12.461 21:36:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:12.461 21:36:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.461 21:36:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.461 21:36:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.461 21:36:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.461 21:36:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.461 21:36:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.461 21:36:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.461 21:36:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.461 21:36:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.461 21:36:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.461 21:36:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.461 21:36:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:12.461 21:36:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:12.461 21:36:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:12.461 21:36:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.461 21:36:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:12.461 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:12.461 21:36:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.461 21:36:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:12.461 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:12.461 21:36:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:12.461 21:36:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.461 
21:36:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.461 21:36:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:12.461 21:36:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.461 21:36:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:12.461 Found net devices under 0000:af:00.0: cvl_0_0 00:21:12.461 21:36:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.461 21:36:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.461 21:36:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.461 21:36:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:12.461 21:36:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.461 21:36:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:12.461 Found net devices under 0000:af:00.1: cvl_0_1 00:21:12.461 21:36:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.461 21:36:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:12.461 21:36:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:12.461 21:36:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:12.461 21:36:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:12.461 21:36:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.462 21:36:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.462 21:36:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.462 21:36:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:12.462 21:36:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.462 21:36:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.462 21:36:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:12.462 21:36:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.462 21:36:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.462 21:36:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:12.462 21:36:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:12.462 21:36:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.462 21:36:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.462 21:36:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.462 21:36:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.462 21:36:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:12.462 21:36:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.462 21:36:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.462 21:36:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.462 21:36:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:12.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:21:12.462 00:21:12.462 --- 10.0.0.2 ping statistics --- 00:21:12.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.462 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:21:12.462 21:36:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:21:12.462 00:21:12.462 --- 10.0.0.1 ping statistics --- 00:21:12.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.462 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:21:12.462 21:36:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.462 21:36:34 -- nvmf/common.sh@411 -- # return 0 00:21:12.462 21:36:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:12.462 21:36:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.462 21:36:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:12.462 21:36:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:12.462 21:36:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.462 21:36:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:12.462 21:36:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:12.462 21:36:34 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:12.462 21:36:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:12.462 21:36:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:12.462 21:36:34 -- common/autotest_common.sh@10 -- # set +x 00:21:12.462 21:36:34 -- nvmf/common.sh@470 -- # nvmfpid=2925209 00:21:12.462 21:36:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:12.462 21:36:34 -- nvmf/common.sh@471 -- # waitforlisten 2925209 00:21:12.462 21:36:34 -- common/autotest_common.sh@817 -- # '[' -z 2925209 ']' 00:21:12.462 21:36:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.462 21:36:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:12.462 21:36:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.462 21:36:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:12.462 21:36:34 -- common/autotest_common.sh@10 -- # set +x 00:21:12.462 [2024-04-24 21:36:34.811775] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:21:12.462 [2024-04-24 21:36:34.811825] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.462 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.462 [2024-04-24 21:36:34.886247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.462 [2024-04-24 21:36:34.957445] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.462 [2024-04-24 21:36:34.957490] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.462 [2024-04-24 21:36:34.957500] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.462 [2024-04-24 21:36:34.957508] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.462 [2024-04-24 21:36:34.957516] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
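[Annotation] nvmftestinit for async_init repeats the physical-NIC plumbing traced above: the two e810 port netdevs are discovered under /sys/bus/pci/devices/<bdf>/net/, the target-side port (cvl_0_0) is moved into a fresh network namespace, addresses are assigned on both ends, and connectivity is proven in both directions before the target starts. Condensed from the commands in the trace:

# Discover the netdev name behind each supported port.
ls /sys/bus/pci/devices/0000:af:00.0/net/     # -> cvl_0_0 (target side)
ls /sys/bus/pci/devices/0000:af:00.1/net/     # -> cvl_0_1 (initiator side)

# Isolate the target port in its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic in, then verify the path both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1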
00:21:12.462 [2024-04-24 21:36:34.957540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.721 21:36:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:12.721 21:36:35 -- common/autotest_common.sh@850 -- # return 0 00:21:12.721 21:36:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:12.721 21:36:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:12.721 21:36:35 -- common/autotest_common.sh@10 -- # set +x 00:21:12.979 21:36:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.979 21:36:35 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:12.979 21:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.979 21:36:35 -- common/autotest_common.sh@10 -- # set +x 00:21:12.979 [2024-04-24 21:36:35.651769] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.979 21:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.979 21:36:35 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:12.979 21:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.979 21:36:35 -- common/autotest_common.sh@10 -- # set +x 00:21:12.979 null0 00:21:12.979 21:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.979 21:36:35 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:12.979 21:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.979 21:36:35 -- common/autotest_common.sh@10 -- # set +x 00:21:12.979 21:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.979 21:36:35 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:12.980 21:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.980 21:36:35 -- common/autotest_common.sh@10 -- # set +x 00:21:12.980 21:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.980 21:36:35 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7365f653726f44c684e496ddabf21b1f 00:21:12.980 21:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.980 21:36:35 -- common/autotest_common.sh@10 -- # set +x 00:21:12.980 21:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.980 21:36:35 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:12.980 21:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.980 21:36:35 -- common/autotest_common.sh@10 -- # set +x 00:21:12.980 [2024-04-24 21:36:35.692020] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.980 21:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.980 21:36:35 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:12.980 21:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.980 21:36:35 -- common/autotest_common.sh@10 -- # set +x 00:21:13.239 nvme0n1 00:21:13.239 21:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.239 21:36:35 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:13.239 21:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.239 21:36:35 -- common/autotest_common.sh@10 -- # set +x 00:21:13.239 [ 00:21:13.239 { 00:21:13.239 "name": "nvme0n1", 00:21:13.239 "aliases": [ 00:21:13.239 
"7365f653-726f-44c6-84e4-96ddabf21b1f" 00:21:13.239 ], 00:21:13.239 "product_name": "NVMe disk", 00:21:13.239 "block_size": 512, 00:21:13.239 "num_blocks": 2097152, 00:21:13.239 "uuid": "7365f653-726f-44c6-84e4-96ddabf21b1f", 00:21:13.239 "assigned_rate_limits": { 00:21:13.239 "rw_ios_per_sec": 0, 00:21:13.239 "rw_mbytes_per_sec": 0, 00:21:13.239 "r_mbytes_per_sec": 0, 00:21:13.239 "w_mbytes_per_sec": 0 00:21:13.239 }, 00:21:13.239 "claimed": false, 00:21:13.239 "zoned": false, 00:21:13.239 "supported_io_types": { 00:21:13.239 "read": true, 00:21:13.239 "write": true, 00:21:13.239 "unmap": false, 00:21:13.239 "write_zeroes": true, 00:21:13.239 "flush": true, 00:21:13.239 "reset": true, 00:21:13.239 "compare": true, 00:21:13.239 "compare_and_write": true, 00:21:13.239 "abort": true, 00:21:13.239 "nvme_admin": true, 00:21:13.239 "nvme_io": true 00:21:13.239 }, 00:21:13.239 "memory_domains": [ 00:21:13.239 { 00:21:13.239 "dma_device_id": "system", 00:21:13.239 "dma_device_type": 1 00:21:13.239 } 00:21:13.239 ], 00:21:13.239 "driver_specific": { 00:21:13.239 "nvme": [ 00:21:13.239 { 00:21:13.239 "trid": { 00:21:13.239 "trtype": "TCP", 00:21:13.239 "adrfam": "IPv4", 00:21:13.239 "traddr": "10.0.0.2", 00:21:13.239 "trsvcid": "4420", 00:21:13.239 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:13.239 }, 00:21:13.239 "ctrlr_data": { 00:21:13.239 "cntlid": 1, 00:21:13.239 "vendor_id": "0x8086", 00:21:13.239 "model_number": "SPDK bdev Controller", 00:21:13.239 "serial_number": "00000000000000000000", 00:21:13.239 "firmware_revision": "24.05", 00:21:13.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:13.239 "oacs": { 00:21:13.239 "security": 0, 00:21:13.239 "format": 0, 00:21:13.239 "firmware": 0, 00:21:13.239 "ns_manage": 0 00:21:13.239 }, 00:21:13.239 "multi_ctrlr": true, 00:21:13.239 "ana_reporting": false 00:21:13.239 }, 00:21:13.239 "vs": { 00:21:13.239 "nvme_version": "1.3" 00:21:13.239 }, 00:21:13.239 "ns_data": { 00:21:13.239 "id": 1, 00:21:13.239 "can_share": true 00:21:13.239 } 00:21:13.239 } 00:21:13.239 ], 00:21:13.239 "mp_policy": "active_passive" 00:21:13.239 } 00:21:13.239 } 00:21:13.239 ] 00:21:13.239 21:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.239 21:36:35 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:13.239 21:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.239 21:36:35 -- common/autotest_common.sh@10 -- # set +x 00:21:13.239 [2024-04-24 21:36:35.940511] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:13.239 [2024-04-24 21:36:35.940563] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x137f120 (9): Bad file descriptor 00:21:13.239 [2024-04-24 21:36:36.072532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:13.239 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.239 21:36:36 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:13.239 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.239 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:21:13.239 [ 00:21:13.239 { 00:21:13.239 "name": "nvme0n1", 00:21:13.239 "aliases": [ 00:21:13.239 "7365f653-726f-44c6-84e4-96ddabf21b1f" 00:21:13.239 ], 00:21:13.239 "product_name": "NVMe disk", 00:21:13.239 "block_size": 512, 00:21:13.239 "num_blocks": 2097152, 00:21:13.239 "uuid": "7365f653-726f-44c6-84e4-96ddabf21b1f", 00:21:13.239 "assigned_rate_limits": { 00:21:13.239 "rw_ios_per_sec": 0, 00:21:13.239 "rw_mbytes_per_sec": 0, 00:21:13.239 "r_mbytes_per_sec": 0, 00:21:13.239 "w_mbytes_per_sec": 0 00:21:13.239 }, 00:21:13.239 "claimed": false, 00:21:13.239 "zoned": false, 00:21:13.239 "supported_io_types": { 00:21:13.239 "read": true, 00:21:13.239 "write": true, 00:21:13.239 "unmap": false, 00:21:13.239 "write_zeroes": true, 00:21:13.239 "flush": true, 00:21:13.239 "reset": true, 00:21:13.239 "compare": true, 00:21:13.239 "compare_and_write": true, 00:21:13.239 "abort": true, 00:21:13.239 "nvme_admin": true, 00:21:13.239 "nvme_io": true 00:21:13.239 }, 00:21:13.239 "memory_domains": [ 00:21:13.239 { 00:21:13.239 "dma_device_id": "system", 00:21:13.239 "dma_device_type": 1 00:21:13.239 } 00:21:13.239 ], 00:21:13.239 "driver_specific": { 00:21:13.239 "nvme": [ 00:21:13.239 { 00:21:13.239 "trid": { 00:21:13.239 "trtype": "TCP", 00:21:13.239 "adrfam": "IPv4", 00:21:13.239 "traddr": "10.0.0.2", 00:21:13.239 "trsvcid": "4420", 00:21:13.239 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:13.239 }, 00:21:13.239 "ctrlr_data": { 00:21:13.239 "cntlid": 2, 00:21:13.239 "vendor_id": "0x8086", 00:21:13.239 "model_number": "SPDK bdev Controller", 00:21:13.239 "serial_number": "00000000000000000000", 00:21:13.239 "firmware_revision": "24.05", 00:21:13.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:13.239 "oacs": { 00:21:13.239 "security": 0, 00:21:13.239 "format": 0, 00:21:13.239 "firmware": 0, 00:21:13.239 "ns_manage": 0 00:21:13.239 }, 00:21:13.239 "multi_ctrlr": true, 00:21:13.239 "ana_reporting": false 00:21:13.239 }, 00:21:13.239 "vs": { 00:21:13.239 "nvme_version": "1.3" 00:21:13.239 }, 00:21:13.239 "ns_data": { 00:21:13.239 "id": 1, 00:21:13.239 "can_share": true 00:21:13.239 } 00:21:13.239 } 00:21:13.239 ], 00:21:13.239 "mp_policy": "active_passive" 00:21:13.239 } 00:21:13.239 } 00:21:13.239 ] 00:21:13.239 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.239 21:36:36 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.239 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.239 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:21:13.239 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.239 21:36:36 -- host/async_init.sh@53 -- # mktemp 00:21:13.239 21:36:36 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.iztgmYmZHS 00:21:13.239 21:36:36 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:13.239 21:36:36 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.iztgmYmZHS 00:21:13.239 21:36:36 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:13.239 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.239 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:21:13.239 21:36:36 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.239 21:36:36 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:13.239 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.239 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:21:13.239 [2024-04-24 21:36:36.121061] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:13.239 [2024-04-24 21:36:36.121177] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:13.239 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.239 21:36:36 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iztgmYmZHS 00:21:13.239 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.239 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:21:13.499 [2024-04-24 21:36:36.129084] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:13.499 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.499 21:36:36 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iztgmYmZHS 00:21:13.499 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.499 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:21:13.499 [2024-04-24 21:36:36.137104] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.499 [2024-04-24 21:36:36.137142] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:13.499 nvme0n1 00:21:13.499 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.499 21:36:36 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:13.499 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.499 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:21:13.499 [ 00:21:13.499 { 00:21:13.499 "name": "nvme0n1", 00:21:13.499 "aliases": [ 00:21:13.499 "7365f653-726f-44c6-84e4-96ddabf21b1f" 00:21:13.499 ], 00:21:13.499 "product_name": "NVMe disk", 00:21:13.499 "block_size": 512, 00:21:13.499 "num_blocks": 2097152, 00:21:13.499 "uuid": "7365f653-726f-44c6-84e4-96ddabf21b1f", 00:21:13.499 "assigned_rate_limits": { 00:21:13.499 "rw_ios_per_sec": 0, 00:21:13.499 "rw_mbytes_per_sec": 0, 00:21:13.499 "r_mbytes_per_sec": 0, 00:21:13.499 "w_mbytes_per_sec": 0 00:21:13.499 }, 00:21:13.499 "claimed": false, 00:21:13.499 "zoned": false, 00:21:13.499 "supported_io_types": { 00:21:13.499 "read": true, 00:21:13.499 "write": true, 00:21:13.499 "unmap": false, 00:21:13.499 "write_zeroes": true, 00:21:13.499 "flush": true, 00:21:13.499 "reset": true, 00:21:13.499 "compare": true, 00:21:13.499 "compare_and_write": true, 00:21:13.499 "abort": true, 00:21:13.499 "nvme_admin": true, 00:21:13.499 "nvme_io": true 00:21:13.499 }, 00:21:13.499 "memory_domains": [ 00:21:13.499 { 00:21:13.499 "dma_device_id": "system", 00:21:13.499 "dma_device_type": 1 00:21:13.499 } 00:21:13.499 ], 00:21:13.499 "driver_specific": { 00:21:13.499 "nvme": [ 00:21:13.499 { 00:21:13.499 "trid": { 00:21:13.499 "trtype": "TCP", 00:21:13.499 "adrfam": "IPv4", 00:21:13.499 "traddr": "10.0.0.2", 
00:21:13.499 "trsvcid": "4421", 00:21:13.499 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:13.499 }, 00:21:13.499 "ctrlr_data": { 00:21:13.499 "cntlid": 3, 00:21:13.499 "vendor_id": "0x8086", 00:21:13.499 "model_number": "SPDK bdev Controller", 00:21:13.499 "serial_number": "00000000000000000000", 00:21:13.499 "firmware_revision": "24.05", 00:21:13.499 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:13.499 "oacs": { 00:21:13.499 "security": 0, 00:21:13.499 "format": 0, 00:21:13.499 "firmware": 0, 00:21:13.499 "ns_manage": 0 00:21:13.499 }, 00:21:13.499 "multi_ctrlr": true, 00:21:13.499 "ana_reporting": false 00:21:13.499 }, 00:21:13.499 "vs": { 00:21:13.499 "nvme_version": "1.3" 00:21:13.499 }, 00:21:13.499 "ns_data": { 00:21:13.499 "id": 1, 00:21:13.499 "can_share": true 00:21:13.499 } 00:21:13.499 } 00:21:13.499 ], 00:21:13.499 "mp_policy": "active_passive" 00:21:13.499 } 00:21:13.499 } 00:21:13.499 ] 00:21:13.499 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.499 21:36:36 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.499 21:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.499 21:36:36 -- common/autotest_common.sh@10 -- # set +x 00:21:13.499 21:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.499 21:36:36 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.iztgmYmZHS 00:21:13.499 21:36:36 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:13.499 21:36:36 -- host/async_init.sh@78 -- # nvmftestfini 00:21:13.499 21:36:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:13.499 21:36:36 -- nvmf/common.sh@117 -- # sync 00:21:13.499 21:36:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:13.499 21:36:36 -- nvmf/common.sh@120 -- # set +e 00:21:13.499 21:36:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:13.499 21:36:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:13.499 rmmod nvme_tcp 00:21:13.499 rmmod nvme_fabrics 00:21:13.499 rmmod nvme_keyring 00:21:13.499 21:36:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:13.499 21:36:36 -- nvmf/common.sh@124 -- # set -e 00:21:13.499 21:36:36 -- nvmf/common.sh@125 -- # return 0 00:21:13.499 21:36:36 -- nvmf/common.sh@478 -- # '[' -n 2925209 ']' 00:21:13.499 21:36:36 -- nvmf/common.sh@479 -- # killprocess 2925209 00:21:13.499 21:36:36 -- common/autotest_common.sh@936 -- # '[' -z 2925209 ']' 00:21:13.499 21:36:36 -- common/autotest_common.sh@940 -- # kill -0 2925209 00:21:13.499 21:36:36 -- common/autotest_common.sh@941 -- # uname 00:21:13.499 21:36:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:13.499 21:36:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2925209 00:21:13.499 21:36:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:13.499 21:36:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:13.499 21:36:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2925209' 00:21:13.499 killing process with pid 2925209 00:21:13.499 21:36:36 -- common/autotest_common.sh@955 -- # kill 2925209 00:21:13.499 [2024-04-24 21:36:36.350566] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:13.499 [2024-04-24 21:36:36.350592] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:13.499 21:36:36 -- common/autotest_common.sh@960 -- # wait 2925209 00:21:13.759 21:36:36 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:13.759 21:36:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:13.759 21:36:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:13.759 21:36:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:13.759 21:36:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:13.759 21:36:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.759 21:36:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.759 21:36:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.300 21:36:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:16.300 00:21:16.300 real 0m10.101s 00:21:16.300 user 0m3.418s 00:21:16.300 sys 0m4.981s 00:21:16.300 21:36:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:16.300 21:36:38 -- common/autotest_common.sh@10 -- # set +x 00:21:16.300 ************************************ 00:21:16.300 END TEST nvmf_async_init 00:21:16.300 ************************************ 00:21:16.300 21:36:38 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:16.300 21:36:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:16.300 21:36:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:16.300 21:36:38 -- common/autotest_common.sh@10 -- # set +x 00:21:16.300 ************************************ 00:21:16.300 START TEST dma 00:21:16.300 ************************************ 00:21:16.300 21:36:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:16.300 * Looking for test storage... 00:21:16.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:16.300 21:36:38 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.300 21:36:38 -- nvmf/common.sh@7 -- # uname -s 00:21:16.300 21:36:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.300 21:36:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.300 21:36:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.300 21:36:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.300 21:36:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.300 21:36:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.300 21:36:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.300 21:36:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.300 21:36:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.300 21:36:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.300 21:36:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:16.300 21:36:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:16.300 21:36:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.300 21:36:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.300 21:36:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.300 21:36:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.300 21:36:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.300 21:36:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.300 21:36:38 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.300 21:36:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.300 21:36:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.300 21:36:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.300 21:36:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.300 21:36:38 -- paths/export.sh@5 -- # export PATH 00:21:16.300 21:36:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.300 21:36:38 -- nvmf/common.sh@47 -- # : 0 00:21:16.300 21:36:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.300 21:36:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.300 21:36:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.300 21:36:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.300 21:36:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.300 21:36:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.300 21:36:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.300 21:36:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.300 21:36:38 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:16.300 21:36:38 -- host/dma.sh@13 -- # exit 0 00:21:16.300 00:21:16.300 real 0m0.137s 00:21:16.300 user 0m0.058s 00:21:16.300 sys 0m0.089s 00:21:16.300 21:36:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:16.300 21:36:38 -- common/autotest_common.sh@10 -- # set +x 00:21:16.300 ************************************ 00:21:16.301 END TEST dma 00:21:16.301 
************************************ 00:21:16.301 21:36:38 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:16.301 21:36:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:16.301 21:36:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:16.301 21:36:38 -- common/autotest_common.sh@10 -- # set +x 00:21:16.301 ************************************ 00:21:16.301 START TEST nvmf_identify 00:21:16.301 ************************************ 00:21:16.301 21:36:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:16.301 * Looking for test storage... 00:21:16.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:16.301 21:36:39 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.301 21:36:39 -- nvmf/common.sh@7 -- # uname -s 00:21:16.301 21:36:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.301 21:36:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.301 21:36:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.301 21:36:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.301 21:36:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.301 21:36:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.301 21:36:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.301 21:36:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.301 21:36:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.301 21:36:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.559 21:36:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:16.559 21:36:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:16.559 21:36:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.559 21:36:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.559 21:36:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.559 21:36:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.559 21:36:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.559 21:36:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.559 21:36:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.559 21:36:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.559 21:36:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.559 21:36:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.559 21:36:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.559 21:36:39 -- paths/export.sh@5 -- # export PATH 00:21:16.559 21:36:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.559 21:36:39 -- nvmf/common.sh@47 -- # : 0 00:21:16.559 21:36:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.559 21:36:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.559 21:36:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.559 21:36:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.559 21:36:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.559 21:36:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.559 21:36:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.560 21:36:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.560 21:36:39 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:16.560 21:36:39 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:16.560 21:36:39 -- host/identify.sh@14 -- # nvmftestinit 00:21:16.560 21:36:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:16.560 21:36:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.560 21:36:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:16.560 21:36:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:16.560 21:36:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:16.560 21:36:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.560 21:36:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.560 21:36:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.560 21:36:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:16.560 21:36:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:16.560 21:36:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:16.560 21:36:39 -- common/autotest_common.sh@10 -- # set +x 00:21:23.128 21:36:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:21:23.128 21:36:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:23.128 21:36:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:23.128 21:36:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:23.128 21:36:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:23.128 21:36:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:23.128 21:36:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:23.128 21:36:45 -- nvmf/common.sh@295 -- # net_devs=() 00:21:23.128 21:36:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:23.128 21:36:45 -- nvmf/common.sh@296 -- # e810=() 00:21:23.129 21:36:45 -- nvmf/common.sh@296 -- # local -ga e810 00:21:23.129 21:36:45 -- nvmf/common.sh@297 -- # x722=() 00:21:23.129 21:36:45 -- nvmf/common.sh@297 -- # local -ga x722 00:21:23.129 21:36:45 -- nvmf/common.sh@298 -- # mlx=() 00:21:23.129 21:36:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:23.129 21:36:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.129 21:36:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.129 21:36:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.129 21:36:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.129 21:36:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.129 21:36:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.129 21:36:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.129 21:36:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.129 21:36:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.129 21:36:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.129 21:36:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.129 21:36:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:23.129 21:36:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:23.129 21:36:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:23.129 21:36:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.129 21:36:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:23.129 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:23.129 21:36:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.129 21:36:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:23.129 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:23.129 21:36:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
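gather_supported_nvmf_pci_devs above matches PCI functions against a table of vendor:device IDs (Intel 0x8086 with E810 parts 0x1592/0x159b and x722 0x37d2, plus the Mellanox 0x15b3 parts) and, as the next lines show, resolves each hit to its net interface through sysfs. A standalone sketch of that matching, reduced to the E810 branch actually taken on this node:

#!/usr/bin/env bash
# Scan PCI for Intel E810 (ice) functions and print their net interfaces.
intel=0x8086
e810=()
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    case "$vendor:$device" in
        "$intel:0x1592"|"$intel:0x159b") e810+=("${dev##*/}") ;;   # E810 device IDs from the table above
    esac
done
for pci in "${e810[@]}"; do
    # a bound function exposes its interface name under .../net/
    echo "Found $pci -> $(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)"
done

On this machine that yields the two ice ports at 0000:af:00.0 and 0000:af:00.1, reported below as cvl_0_0 and cvl_0_1.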
00:21:23.129 21:36:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.129 21:36:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.129 21:36:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:23.129 21:36:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.129 21:36:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:23.129 Found net devices under 0000:af:00.0: cvl_0_0 00:21:23.129 21:36:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.129 21:36:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.129 21:36:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.129 21:36:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:23.129 21:36:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.129 21:36:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:23.129 Found net devices under 0000:af:00.1: cvl_0_1 00:21:23.129 21:36:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.129 21:36:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:23.129 21:36:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:23.129 21:36:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:23.129 21:36:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.129 21:36:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.129 21:36:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.129 21:36:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:23.129 21:36:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.129 21:36:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.129 21:36:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:23.129 21:36:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.129 21:36:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.129 21:36:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:23.129 21:36:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:23.129 21:36:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.129 21:36:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.129 21:36:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.129 21:36:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.129 21:36:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:23.129 21:36:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.129 21:36:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.129 21:36:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.129 21:36:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:23.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
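nvmf_tcp_init above splits the two ports across a network namespace: cvl_0_0 becomes the target side at 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule opens the NVMe/TCP port before reachability is probed in both directions. The same wiring, collected into one runnable sequence (root required; interface names as discovered above):

NS=cvl_0_0_ns_spdk TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1
ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"            # target port moves into the ns
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # root ns -> target ns, as below
ip netns exec "$NS" ping -c 1 10.0.0.1          # and the reverse path

nvmf_tgt itself is then started inside the namespace, which is why host/identify.sh@18 below wraps it in ip netns exec cvl_0_0_ns_spdk.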
00:21:23.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:21:23.129 00:21:23.129 --- 10.0.0.2 ping statistics --- 00:21:23.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.129 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:21:23.129 21:36:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:21:23.129 00:21:23.129 --- 10.0.0.1 ping statistics --- 00:21:23.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.129 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:21:23.129 21:36:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.129 21:36:45 -- nvmf/common.sh@411 -- # return 0 00:21:23.129 21:36:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:23.129 21:36:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.129 21:36:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:23.129 21:36:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.129 21:36:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:23.129 21:36:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:23.129 21:36:45 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:23.129 21:36:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:23.129 21:36:45 -- common/autotest_common.sh@10 -- # set +x 00:21:23.129 21:36:45 -- host/identify.sh@19 -- # nvmfpid=2929186 00:21:23.129 21:36:45 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:23.129 21:36:45 -- host/identify.sh@23 -- # waitforlisten 2929186 00:21:23.129 21:36:45 -- common/autotest_common.sh@817 -- # '[' -z 2929186 ']' 00:21:23.129 21:36:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.129 21:36:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:23.129 21:36:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.129 21:36:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:23.129 21:36:45 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:23.129 21:36:45 -- common/autotest_common.sh@10 -- # set +x 00:21:23.129 [2024-04-24 21:36:45.560327] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:21:23.129 [2024-04-24 21:36:45.560373] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.129 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.129 [2024-04-24 21:36:45.634116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:23.129 [2024-04-24 21:36:45.708136] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.129 [2024-04-24 21:36:45.708171] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:23.129 [2024-04-24 21:36:45.708181] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.129 [2024-04-24 21:36:45.708190] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.129 [2024-04-24 21:36:45.708198] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.129 [2024-04-24 21:36:45.708241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.129 [2024-04-24 21:36:45.708337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.129 [2024-04-24 21:36:45.708420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.129 [2024-04-24 21:36:45.708422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.698 21:36:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:23.698 21:36:46 -- common/autotest_common.sh@850 -- # return 0 00:21:23.698 21:36:46 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.698 21:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.698 21:36:46 -- common/autotest_common.sh@10 -- # set +x 00:21:23.698 [2024-04-24 21:36:46.370172] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.698 21:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.698 21:36:46 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:23.698 21:36:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:23.698 21:36:46 -- common/autotest_common.sh@10 -- # set +x 00:21:23.698 21:36:46 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:23.698 21:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.698 21:36:46 -- common/autotest_common.sh@10 -- # set +x 00:21:23.698 Malloc0 00:21:23.698 21:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.698 21:36:46 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.698 21:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.698 21:36:46 -- common/autotest_common.sh@10 -- # set +x 00:21:23.698 21:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.698 21:36:46 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:23.698 21:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.698 21:36:46 -- common/autotest_common.sh@10 -- # set +x 00:21:23.698 21:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.698 21:36:46 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.698 21:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.698 21:36:46 -- common/autotest_common.sh@10 -- # set +x 00:21:23.698 [2024-04-24 21:36:46.473051] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.698 21:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.698 21:36:46 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:23.698 21:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.698 21:36:46 -- common/autotest_common.sh@10 -- # set +x 00:21:23.698 21:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.698 21:36:46 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:21:23.698 21:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.698 21:36:46 -- common/autotest_common.sh@10 -- # set +x 00:21:23.698 [2024-04-24 21:36:46.488857] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:23.698 [ 00:21:23.698 { 00:21:23.698 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:23.698 "subtype": "Discovery", 00:21:23.698 "listen_addresses": [ 00:21:23.698 { 00:21:23.698 "transport": "TCP", 00:21:23.698 "trtype": "TCP", 00:21:23.698 "adrfam": "IPv4", 00:21:23.698 "traddr": "10.0.0.2", 00:21:23.698 "trsvcid": "4420" 00:21:23.698 } 00:21:23.698 ], 00:21:23.698 "allow_any_host": true, 00:21:23.698 "hosts": [] 00:21:23.698 }, 00:21:23.698 { 00:21:23.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.698 "subtype": "NVMe", 00:21:23.698 "listen_addresses": [ 00:21:23.698 { 00:21:23.698 "transport": "TCP", 00:21:23.698 "trtype": "TCP", 00:21:23.698 "adrfam": "IPv4", 00:21:23.698 "traddr": "10.0.0.2", 00:21:23.698 "trsvcid": "4420" 00:21:23.698 } 00:21:23.698 ], 00:21:23.698 "allow_any_host": true, 00:21:23.698 "hosts": [], 00:21:23.698 "serial_number": "SPDK00000000000001", 00:21:23.698 "model_number": "SPDK bdev Controller", 00:21:23.698 "max_namespaces": 32, 00:21:23.698 "min_cntlid": 1, 00:21:23.699 "max_cntlid": 65519, 00:21:23.699 "namespaces": [ 00:21:23.699 { 00:21:23.699 "nsid": 1, 00:21:23.699 "bdev_name": "Malloc0", 00:21:23.699 "name": "Malloc0", 00:21:23.699 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:23.699 "eui64": "ABCDEF0123456789", 00:21:23.699 "uuid": "3bca1156-1a8e-4bb8-a8c3-31330d206d4e" 00:21:23.699 } 00:21:23.699 ] 00:21:23.699 } 00:21:23.699 ] 00:21:23.699 21:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.699 21:36:46 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:23.699 [2024-04-24 21:36:46.531344] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
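The rpc_cmd calls above are thin wrappers over scripts/rpc.py talking to the default /var/tmp/spdk.sock, so the subsystem described by the JSON just printed can be provisioned with the same handful of RPCs (values copied from the trace; the rpc.py path is shortened here):

rpc=./spdk/scripts/rpc.py                        # defaults to /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192     # flags as in NVMF_TRANSPORT_OPTS above
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ramdisk, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems                         # returns the JSON shown above

Here -a marks the subsystem allow_any_host and -s sets its serial number, both visible in the nvmf_get_subsystems output; "discovery" is the literal name the harness passes when adding the discovery subsystem's listener.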
00:21:23.699 [2024-04-24 21:36:46.531381] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2929465 ] 00:21:23.699 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.699 [2024-04-24 21:36:46.562808] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:23.699 [2024-04-24 21:36:46.562854] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:23.699 [2024-04-24 21:36:46.562860] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:23.699 [2024-04-24 21:36:46.562872] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:23.699 [2024-04-24 21:36:46.562881] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:23.699 [2024-04-24 21:36:46.563469] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:23.699 [2024-04-24 21:36:46.563507] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb1fd20 0 00:21:23.699 [2024-04-24 21:36:46.569463] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:23.699 [2024-04-24 21:36:46.569481] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:23.699 [2024-04-24 21:36:46.569487] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:23.699 [2024-04-24 21:36:46.569492] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:23.699 [2024-04-24 21:36:46.569534] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.569541] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.569546] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb1fd20) 00:21:23.699 [2024-04-24 21:36:46.569562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:23.699 [2024-04-24 21:36:46.569581] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89a00, cid 0, qid 0 00:21:23.699 [2024-04-24 21:36:46.577461] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.699 [2024-04-24 21:36:46.577476] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.699 [2024-04-24 21:36:46.577481] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.577486] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89a00) on tqpair=0xb1fd20 00:21:23.699 [2024-04-24 21:36:46.577500] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:23.699 [2024-04-24 21:36:46.577508] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:23.699 [2024-04-24 21:36:46.577515] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:23.699 [2024-04-24 21:36:46.577530] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.577535] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.577539] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb1fd20) 00:21:23.699 [2024-04-24 21:36:46.577547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.699 [2024-04-24 21:36:46.577561] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89a00, cid 0, qid 0 00:21:23.699 [2024-04-24 21:36:46.577805] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.699 [2024-04-24 21:36:46.577815] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.699 [2024-04-24 21:36:46.577820] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.577825] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89a00) on tqpair=0xb1fd20 00:21:23.699 [2024-04-24 21:36:46.577832] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:23.699 [2024-04-24 21:36:46.577842] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:23.699 [2024-04-24 21:36:46.577851] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.577855] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.577860] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb1fd20) 00:21:23.699 [2024-04-24 21:36:46.577868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.699 [2024-04-24 21:36:46.577882] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89a00, cid 0, qid 0 00:21:23.699 [2024-04-24 21:36:46.578016] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.699 [2024-04-24 21:36:46.578024] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.699 [2024-04-24 21:36:46.578028] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.578033] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89a00) on tqpair=0xb1fd20 00:21:23.699 [2024-04-24 21:36:46.578039] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:23.699 [2024-04-24 21:36:46.578049] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:23.699 [2024-04-24 21:36:46.578057] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.578062] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.578066] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb1fd20) 00:21:23.699 [2024-04-24 21:36:46.578074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.699 [2024-04-24 21:36:46.578087] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89a00, cid 0, qid 0 00:21:23.699 [2024-04-24 21:36:46.578223] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.699 [2024-04-24 21:36:46.578230] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.699 [2024-04-24 21:36:46.578234] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.578239] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89a00) on tqpair=0xb1fd20 00:21:23.699 [2024-04-24 21:36:46.578245] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:23.699 [2024-04-24 21:36:46.578256] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.578261] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.578266] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb1fd20) 00:21:23.699 [2024-04-24 21:36:46.578273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.699 [2024-04-24 21:36:46.578286] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89a00, cid 0, qid 0 00:21:23.699 [2024-04-24 21:36:46.578418] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.699 [2024-04-24 21:36:46.578425] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.699 [2024-04-24 21:36:46.578430] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.578435] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89a00) on tqpair=0xb1fd20 00:21:23.699 [2024-04-24 21:36:46.578440] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:23.699 [2024-04-24 21:36:46.578447] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:23.699 [2024-04-24 21:36:46.578463] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:23.699 [2024-04-24 21:36:46.578570] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:23.699 [2024-04-24 21:36:46.578577] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:23.699 [2024-04-24 21:36:46.578587] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.578592] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.578596] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb1fd20) 00:21:23.699 [2024-04-24 21:36:46.578607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.699 [2024-04-24 21:36:46.578620] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89a00, cid 0, qid 0 00:21:23.699 [2024-04-24 21:36:46.578830] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.699 [2024-04-24 21:36:46.578837] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.699 [2024-04-24 21:36:46.578841] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.699 
[2024-04-24 21:36:46.578846] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89a00) on tqpair=0xb1fd20 00:21:23.699 [2024-04-24 21:36:46.578853] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:23.699 [2024-04-24 21:36:46.578864] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.578869] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.578874] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb1fd20) 00:21:23.699 [2024-04-24 21:36:46.578881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.699 [2024-04-24 21:36:46.578894] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89a00, cid 0, qid 0 00:21:23.699 [2024-04-24 21:36:46.579048] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.699 [2024-04-24 21:36:46.579058] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.699 [2024-04-24 21:36:46.579062] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.699 [2024-04-24 21:36:46.579067] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89a00) on tqpair=0xb1fd20 00:21:23.700 [2024-04-24 21:36:46.579073] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:23.700 [2024-04-24 21:36:46.579080] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:23.700 [2024-04-24 21:36:46.579091] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:23.700 [2024-04-24 21:36:46.579107] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:23.700 [2024-04-24 21:36:46.579119] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.700 [2024-04-24 21:36:46.579124] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb1fd20) 00:21:23.700 [2024-04-24 21:36:46.579133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.700 [2024-04-24 21:36:46.579147] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89a00, cid 0, qid 0 00:21:23.700 [2024-04-24 21:36:46.579326] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.700 [2024-04-24 21:36:46.579339] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.700 [2024-04-24 21:36:46.579347] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.700 [2024-04-24 21:36:46.579355] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb1fd20): datao=0, datal=4096, cccid=0 00:21:23.700 [2024-04-24 21:36:46.579363] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb89a00) on tqpair(0xb1fd20): expected_datao=0, payload_size=4096 00:21:23.700 [2024-04-24 21:36:46.579370] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.700 
[2024-04-24 21:36:46.579594] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.700 [2024-04-24 21:36:46.579600] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.961 [2024-04-24 21:36:46.624461] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.961 [2024-04-24 21:36:46.624477] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.961 [2024-04-24 21:36:46.624486] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.961 [2024-04-24 21:36:46.624491] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89a00) on tqpair=0xb1fd20 00:21:23.961 [2024-04-24 21:36:46.624502] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:23.961 [2024-04-24 21:36:46.624509] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:23.961 [2024-04-24 21:36:46.624515] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:23.962 [2024-04-24 21:36:46.624521] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:23.962 [2024-04-24 21:36:46.624527] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:23.962 [2024-04-24 21:36:46.624533] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:23.962 [2024-04-24 21:36:46.624544] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:23.962 [2024-04-24 21:36:46.624553] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.624559] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.624563] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb1fd20) 00:21:23.962 [2024-04-24 21:36:46.624572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:23.962 [2024-04-24 21:36:46.624587] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89a00, cid 0, qid 0 00:21:23.962 [2024-04-24 21:36:46.624786] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.962 [2024-04-24 21:36:46.624795] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.962 [2024-04-24 21:36:46.624800] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.624805] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89a00) on tqpair=0xb1fd20 00:21:23.962 [2024-04-24 21:36:46.624814] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.624818] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.624823] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb1fd20) 00:21:23.962 [2024-04-24 21:36:46.624830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.962 [2024-04-24 21:36:46.624838] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.624842] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.624847] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb1fd20) 00:21:23.962 [2024-04-24 21:36:46.624853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.962 [2024-04-24 21:36:46.624860] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.624865] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.624869] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb1fd20) 00:21:23.962 [2024-04-24 21:36:46.624875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.962 [2024-04-24 21:36:46.624882] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.624887] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.624892] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb1fd20) 00:21:23.962 [2024-04-24 21:36:46.624901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.962 [2024-04-24 21:36:46.624908] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:23.962 [2024-04-24 21:36:46.624922] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:23.962 [2024-04-24 21:36:46.624931] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.624936] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb1fd20) 00:21:23.962 [2024-04-24 21:36:46.624943] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.962 [2024-04-24 21:36:46.624958] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89a00, cid 0, qid 0 00:21:23.962 [2024-04-24 21:36:46.624964] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89b60, cid 1, qid 0 00:21:23.962 [2024-04-24 21:36:46.624970] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89cc0, cid 2, qid 0 00:21:23.962 [2024-04-24 21:36:46.624975] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89e20, cid 3, qid 0 00:21:23.962 [2024-04-24 21:36:46.624980] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89f80, cid 4, qid 0 00:21:23.962 [2024-04-24 21:36:46.625152] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.962 [2024-04-24 21:36:46.625161] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.962 [2024-04-24 21:36:46.625165] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.625170] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89f80) on tqpair=0xb1fd20 00:21:23.962 [2024-04-24 21:36:46.625176] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:23.962 [2024-04-24 21:36:46.625183] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:23.962 [2024-04-24 21:36:46.625196] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.625201] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb1fd20) 00:21:23.962 [2024-04-24 21:36:46.625209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.962 [2024-04-24 21:36:46.625222] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89f80, cid 4, qid 0 00:21:23.962 [2024-04-24 21:36:46.625439] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.962 [2024-04-24 21:36:46.625448] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.962 [2024-04-24 21:36:46.625461] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.625466] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb1fd20): datao=0, datal=4096, cccid=4 00:21:23.962 [2024-04-24 21:36:46.625472] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb89f80) on tqpair(0xb1fd20): expected_datao=0, payload_size=4096 00:21:23.962 [2024-04-24 21:36:46.625478] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.625485] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.625490] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.625729] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.962 [2024-04-24 21:36:46.625736] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.962 [2024-04-24 21:36:46.625740] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.625745] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89f80) on tqpair=0xb1fd20 00:21:23.962 [2024-04-24 21:36:46.625760] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:23.962 [2024-04-24 21:36:46.625783] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.625788] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb1fd20) 00:21:23.962 [2024-04-24 21:36:46.625796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.962 [2024-04-24 21:36:46.625804] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.625809] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.625813] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb1fd20) 00:21:23.962 [2024-04-24 21:36:46.625820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:23.962 [2024-04-24 21:36:46.625838] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xb89f80, cid 4, qid 0 00:21:23.962 [2024-04-24 21:36:46.625845] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb8a0e0, cid 5, qid 0 00:21:23.962 [2024-04-24 21:36:46.626012] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.962 [2024-04-24 21:36:46.626020] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.962 [2024-04-24 21:36:46.626024] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.626029] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb1fd20): datao=0, datal=1024, cccid=4 00:21:23.962 [2024-04-24 21:36:46.626035] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb89f80) on tqpair(0xb1fd20): expected_datao=0, payload_size=1024 00:21:23.962 [2024-04-24 21:36:46.626041] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.626047] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.626052] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.626058] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.962 [2024-04-24 21:36:46.626065] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.962 [2024-04-24 21:36:46.626069] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.626074] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb8a0e0) on tqpair=0xb1fd20 00:21:23.962 [2024-04-24 21:36:46.666718] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.962 [2024-04-24 21:36:46.666732] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.962 [2024-04-24 21:36:46.666737] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.666742] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89f80) on tqpair=0xb1fd20 00:21:23.962 [2024-04-24 21:36:46.666756] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.666761] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb1fd20) 00:21:23.962 [2024-04-24 21:36:46.666769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.962 [2024-04-24 21:36:46.666789] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89f80, cid 4, qid 0 00:21:23.962 [2024-04-24 21:36:46.667011] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.962 [2024-04-24 21:36:46.667019] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.962 [2024-04-24 21:36:46.667024] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.667029] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb1fd20): datao=0, datal=3072, cccid=4 00:21:23.962 [2024-04-24 21:36:46.667035] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb89f80) on tqpair(0xb1fd20): expected_datao=0, payload_size=3072 00:21:23.962 [2024-04-24 21:36:46.667041] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.667051] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.667056] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.962 [2024-04-24 21:36:46.707661] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.963 [2024-04-24 21:36:46.707676] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.963 [2024-04-24 21:36:46.707681] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.963 [2024-04-24 21:36:46.707686] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89f80) on tqpair=0xb1fd20 00:21:23.963 [2024-04-24 21:36:46.707698] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.963 [2024-04-24 21:36:46.707703] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb1fd20) 00:21:23.963 [2024-04-24 21:36:46.707711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.963 [2024-04-24 21:36:46.707730] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89f80, cid 4, qid 0 00:21:23.963 [2024-04-24 21:36:46.707867] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:23.963 [2024-04-24 21:36:46.707875] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:23.963 [2024-04-24 21:36:46.707879] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:23.963 [2024-04-24 21:36:46.707884] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb1fd20): datao=0, datal=8, cccid=4 00:21:23.963 [2024-04-24 21:36:46.707890] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb89f80) on tqpair(0xb1fd20): expected_datao=0, payload_size=8 00:21:23.963 [2024-04-24 21:36:46.707896] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.963 [2024-04-24 21:36:46.707903] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:23.963 [2024-04-24 21:36:46.707907] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:23.963 [2024-04-24 21:36:46.752461] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.963 [2024-04-24 21:36:46.752471] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.963 [2024-04-24 21:36:46.752476] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.963 [2024-04-24 21:36:46.752481] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89f80) on tqpair=0xb1fd20 00:21:23.963 ===================================================== 00:21:23.963 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:23.963 ===================================================== 00:21:23.963 Controller Capabilities/Features 00:21:23.963 ================================ 00:21:23.963 Vendor ID: 0000 00:21:23.963 Subsystem Vendor ID: 0000 00:21:23.963 Serial Number: .................... 00:21:23.963 Model Number: ........................................ 
00:21:23.963 Firmware Version: 24.05 00:21:23.963 Recommended Arb Burst: 0 00:21:23.963 IEEE OUI Identifier: 00 00 00 00:21:23.963 Multi-path I/O 00:21:23.963 May have multiple subsystem ports: No 00:21:23.963 May have multiple controllers: No 00:21:23.963 Associated with SR-IOV VF: No 00:21:23.963 Max Data Transfer Size: 131072 00:21:23.963 Max Number of Namespaces: 0 00:21:23.963 Max Number of I/O Queues: 1024 00:21:23.963 NVMe Specification Version (VS): 1.3 00:21:23.963 NVMe Specification Version (Identify): 1.3 00:21:23.963 Maximum Queue Entries: 128 00:21:23.963 Contiguous Queues Required: Yes 00:21:23.963 Arbitration Mechanisms Supported 00:21:23.963 Weighted Round Robin: Not Supported 00:21:23.963 Vendor Specific: Not Supported 00:21:23.963 Reset Timeout: 15000 ms 00:21:23.963 Doorbell Stride: 4 bytes 00:21:23.963 NVM Subsystem Reset: Not Supported 00:21:23.963 Command Sets Supported 00:21:23.963 NVM Command Set: Supported 00:21:23.963 Boot Partition: Not Supported 00:21:23.963 Memory Page Size Minimum: 4096 bytes 00:21:23.963 Memory Page Size Maximum: 4096 bytes 00:21:23.963 Persistent Memory Region: Not Supported 00:21:23.963 Optional Asynchronous Events Supported 00:21:23.963 Namespace Attribute Notices: Not Supported 00:21:23.963 Firmware Activation Notices: Not Supported 00:21:23.963 ANA Change Notices: Not Supported 00:21:23.963 PLE Aggregate Log Change Notices: Not Supported 00:21:23.963 LBA Status Info Alert Notices: Not Supported 00:21:23.963 EGE Aggregate Log Change Notices: Not Supported 00:21:23.963 Normal NVM Subsystem Shutdown event: Not Supported 00:21:23.963 Zone Descriptor Change Notices: Not Supported 00:21:23.963 Discovery Log Change Notices: Supported 00:21:23.963 Controller Attributes 00:21:23.963 128-bit Host Identifier: Not Supported 00:21:23.963 Non-Operational Permissive Mode: Not Supported 00:21:23.963 NVM Sets: Not Supported 00:21:23.963 Read Recovery Levels: Not Supported 00:21:23.963 Endurance Groups: Not Supported 00:21:23.963 Predictable Latency Mode: Not Supported 00:21:23.963 Traffic Based Keep ALive: Not Supported 00:21:23.963 Namespace Granularity: Not Supported 00:21:23.963 SQ Associations: Not Supported 00:21:23.963 UUID List: Not Supported 00:21:23.963 Multi-Domain Subsystem: Not Supported 00:21:23.963 Fixed Capacity Management: Not Supported 00:21:23.963 Variable Capacity Management: Not Supported 00:21:23.963 Delete Endurance Group: Not Supported 00:21:23.963 Delete NVM Set: Not Supported 00:21:23.963 Extended LBA Formats Supported: Not Supported 00:21:23.963 Flexible Data Placement Supported: Not Supported 00:21:23.963 00:21:23.963 Controller Memory Buffer Support 00:21:23.963 ================================ 00:21:23.963 Supported: No 00:21:23.963 00:21:23.963 Persistent Memory Region Support 00:21:23.963 ================================ 00:21:23.963 Supported: No 00:21:23.963 00:21:23.963 Admin Command Set Attributes 00:21:23.963 ============================ 00:21:23.963 Security Send/Receive: Not Supported 00:21:23.963 Format NVM: Not Supported 00:21:23.963 Firmware Activate/Download: Not Supported 00:21:23.963 Namespace Management: Not Supported 00:21:23.963 Device Self-Test: Not Supported 00:21:23.963 Directives: Not Supported 00:21:23.963 NVMe-MI: Not Supported 00:21:23.963 Virtualization Management: Not Supported 00:21:23.963 Doorbell Buffer Config: Not Supported 00:21:23.963 Get LBA Status Capability: Not Supported 00:21:23.963 Command & Feature Lockdown Capability: Not Supported 00:21:23.963 Abort Command Limit: 1 00:21:23.963 Async 
Event Request Limit: 4 00:21:23.963 Number of Firmware Slots: N/A 00:21:23.963 Firmware Slot 1 Read-Only: N/A 00:21:23.963 Firmware Activation Without Reset: N/A 00:21:23.963 Multiple Update Detection Support: N/A 00:21:23.963 Firmware Update Granularity: No Information Provided 00:21:23.963 Per-Namespace SMART Log: No 00:21:23.963 Asymmetric Namespace Access Log Page: Not Supported 00:21:23.963 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:23.963 Command Effects Log Page: Not Supported 00:21:23.963 Get Log Page Extended Data: Supported 00:21:23.963 Telemetry Log Pages: Not Supported 00:21:23.963 Persistent Event Log Pages: Not Supported 00:21:23.963 Supported Log Pages Log Page: May Support 00:21:23.963 Commands Supported & Effects Log Page: Not Supported 00:21:23.963 Feature Identifiers & Effects Log Page:May Support 00:21:23.963 NVMe-MI Commands & Effects Log Page: May Support 00:21:23.963 Data Area 4 for Telemetry Log: Not Supported 00:21:23.963 Error Log Page Entries Supported: 128 00:21:23.963 Keep Alive: Not Supported 00:21:23.963 00:21:23.963 NVM Command Set Attributes 00:21:23.963 ========================== 00:21:23.963 Submission Queue Entry Size 00:21:23.963 Max: 1 00:21:23.963 Min: 1 00:21:23.963 Completion Queue Entry Size 00:21:23.963 Max: 1 00:21:23.963 Min: 1 00:21:23.963 Number of Namespaces: 0 00:21:23.963 Compare Command: Not Supported 00:21:23.963 Write Uncorrectable Command: Not Supported 00:21:23.963 Dataset Management Command: Not Supported 00:21:23.963 Write Zeroes Command: Not Supported 00:21:23.963 Set Features Save Field: Not Supported 00:21:23.963 Reservations: Not Supported 00:21:23.963 Timestamp: Not Supported 00:21:23.963 Copy: Not Supported 00:21:23.963 Volatile Write Cache: Not Present 00:21:23.963 Atomic Write Unit (Normal): 1 00:21:23.963 Atomic Write Unit (PFail): 1 00:21:23.963 Atomic Compare & Write Unit: 1 00:21:23.963 Fused Compare & Write: Supported 00:21:23.963 Scatter-Gather List 00:21:23.963 SGL Command Set: Supported 00:21:23.963 SGL Keyed: Supported 00:21:23.963 SGL Bit Bucket Descriptor: Not Supported 00:21:23.963 SGL Metadata Pointer: Not Supported 00:21:23.963 Oversized SGL: Not Supported 00:21:23.963 SGL Metadata Address: Not Supported 00:21:23.963 SGL Offset: Supported 00:21:23.963 Transport SGL Data Block: Not Supported 00:21:23.963 Replay Protected Memory Block: Not Supported 00:21:23.963 00:21:23.963 Firmware Slot Information 00:21:23.963 ========================= 00:21:23.963 Active slot: 0 00:21:23.963 00:21:23.963 00:21:23.963 Error Log 00:21:23.963 ========= 00:21:23.963 00:21:23.963 Active Namespaces 00:21:23.963 ================= 00:21:23.963 Discovery Log Page 00:21:23.963 ================== 00:21:23.963 Generation Counter: 2 00:21:23.963 Number of Records: 2 00:21:23.963 Record Format: 0 00:21:23.963 00:21:23.963 Discovery Log Entry 0 00:21:23.963 ---------------------- 00:21:23.963 Transport Type: 3 (TCP) 00:21:23.964 Address Family: 1 (IPv4) 00:21:23.964 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:23.964 Entry Flags: 00:21:23.964 Duplicate Returned Information: 1 00:21:23.964 Explicit Persistent Connection Support for Discovery: 1 00:21:23.964 Transport Requirements: 00:21:23.964 Secure Channel: Not Required 00:21:23.964 Port ID: 0 (0x0000) 00:21:23.964 Controller ID: 65535 (0xffff) 00:21:23.964 Admin Max SQ Size: 128 00:21:23.964 Transport Service Identifier: 4420 00:21:23.964 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:23.964 Transport Address: 10.0.0.2 00:21:23.964 
Discovery Log Entry 1 00:21:23.964 ---------------------- 00:21:23.964 Transport Type: 3 (TCP) 00:21:23.964 Address Family: 1 (IPv4) 00:21:23.964 Subsystem Type: 2 (NVM Subsystem) 00:21:23.964 Entry Flags: 00:21:23.964 Duplicate Returned Information: 0 00:21:23.964 Explicit Persistent Connection Support for Discovery: 0 00:21:23.964 Transport Requirements: 00:21:23.964 Secure Channel: Not Required 00:21:23.964 Port ID: 0 (0x0000) 00:21:23.964 Controller ID: 65535 (0xffff) 00:21:23.964 Admin Max SQ Size: 128 00:21:23.964 Transport Service Identifier: 4420 00:21:23.964 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:23.964 Transport Address: 10.0.0.2 [2024-04-24 21:36:46.752567] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:23.964 [2024-04-24 21:36:46.752582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.964 [2024-04-24 21:36:46.752590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.964 [2024-04-24 21:36:46.752597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.964 [2024-04-24 21:36:46.752604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:23.964 [2024-04-24 21:36:46.752613] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.964 [2024-04-24 21:36:46.752618] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.964 [2024-04-24 21:36:46.752622] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb1fd20) 00:21:23.964 [2024-04-24 21:36:46.752630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.964 [2024-04-24 21:36:46.752645] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89e20, cid 3, qid 0 00:21:23.964 [2024-04-24 21:36:46.752795] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.964 [2024-04-24 21:36:46.752804] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.964 [2024-04-24 21:36:46.752809] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:23.964 [2024-04-24 21:36:46.752813] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89e20) on tqpair=0xb1fd20 00:21:23.964 [2024-04-24 21:36:46.752824] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:23.964 [2024-04-24 21:36:46.752829] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:23.964 [2024-04-24 21:36:46.752834] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb1fd20) 00:21:23.964 [2024-04-24 21:36:46.752841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.964 [2024-04-24 21:36:46.752859] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89e20, cid 3, qid 0 00:21:23.964 [2024-04-24 21:36:46.753022] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:23.964 [2024-04-24 21:36:46.753030] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:23.964 [2024-04-24 21:36:46.753034] 
00:21:23.964 [2024-04-24 21:36:46.753039] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89e20) on tqpair=0xb1fd20
00:21:23.964 [2024-04-24 21:36:46.753045] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:21:23.964 [2024-04-24 21:36:46.753051] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:21:23.964 [2024-04-24 21:36:46.753062] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:23.964 [2024-04-24 21:36:46.753067] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:23.964 [2024-04-24 21:36:46.753072] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb1fd20)
00:21:23.964 [2024-04-24 21:36:46.753079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:23.964 [2024-04-24 21:36:46.753091] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89e20, cid 3, qid 0
00:21:23.964 [2024-04-24 21:36:46.753225] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:23.964 [2024-04-24 21:36:46.753232] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:23.964 [2024-04-24 21:36:46.753237] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:23.964 [2024-04-24 21:36:46.753242] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89e20) on tqpair=0xb1fd20
00:21:23.964 [2024-04-24 21:36:46.753253] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:23.964 [2024-04-24 21:36:46.753258] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:23.964 [2024-04-24 21:36:46.753263] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb1fd20)
00:21:23.964 [2024-04-24 21:36:46.753270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:23.964 [2024-04-24 21:36:46.753282] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89e20, cid 3, qid 0
00:21:23.964 [2024-04-24 21:36:46.753415] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:23.964 [2024-04-24 21:36:46.753422] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:23.964 [2024-04-24 21:36:46.753427] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:23.964 [2024-04-24 21:36:46.753431] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89e20) on tqpair=0xb1fd20
00:21:23.964 [2024-04-24 21:36:46.753441] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:23.964 [2024-04-24 21:36:46.753446] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:23.964 [2024-04-24 21:36:46.753458] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb1fd20)
00:21:23.964 [2024-04-24 21:36:46.753465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:23.964 [2024-04-24 21:36:46.753479] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb89e20, cid 3, qid 0
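The trace above is the driver's graceful-shutdown path for the discovery controller: queued admin commands are aborted (the ABORTED - SQ DELETION notices), CC.SHN is written via a Fabrics Property Set, and CSTS is then polled with repeated Property Gets until shutdown completes. From an application's point of view the whole sequence hangs off a single public call; a minimal sketch, with error handling omitted:

/* Minimal sketch: detaching the controller triggers the shutdown
 * sequence traced above (abort queued admin requests, set CC.SHN,
 * poll CSTS until complete or the 10000 ms timeout logged above). */
#include "spdk/nvme.h"

static void
disconnect_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_detach(ctrlr);
}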
00:21:23.966 [2024-04-24 21:36:46.760743] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:23.966 [2024-04-24 21:36:46.760750] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:23.966 [2024-04-24 21:36:46.760755] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:23.966 [2024-04-24 21:36:46.760760] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb89e20) on tqpair=0xb1fd20
00:21:23.966 [2024-04-24 21:36:46.760769] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:21:23.966
00:21:23.966 21:36:46 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:21:23.966 [2024-04-24 21:36:46.803213] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:21:23.966 [2024-04-24 21:36:46.803252] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2929471 ]
00:21:23.966 EAL: No free 2048 kB hugepages reported on node 1
00:21:23.966 [2024-04-24 21:36:46.835505] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:21:23.966 [2024-04-24 21:36:46.835544] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:21:23.966 [2024-04-24 21:36:46.835550] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:21:23.966 [2024-04-24 21:36:46.835561] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:21:23.966 [2024-04-24 21:36:46.835569] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:21:23.966 [2024-04-24 21:36:46.836133] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:21:23.966 [2024-04-24 21:36:46.836157] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x108ed20 0
00:21:24.228 [2024-04-24 21:36:46.850469] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:21:24.228 [2024-04-24 21:36:46.850487] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:21:24.228 [2024-04-24 21:36:46.850492] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:21:24.228 [2024-04-24 21:36:46.850497] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:21:24.228 [2024-04-24 21:36:46.850533] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.228 [2024-04-24 21:36:46.850539] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.228 [2024-04-24 21:36:46.850544] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x108ed20)
00:21:24.228 [2024-04-24 21:36:46.850556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:21:24.228 [2024-04-24 21:36:46.850576] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8a00, cid 0, qid 0
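The -r argument to spdk_nvme_identify above is a transport ID string. A minimal sketch of the programmatic equivalent using the public SPDK host API (event-loop setup and error reporting omitted):

/* Sketch: parse the same transport ID string and connect; this drives
 * the init state machine logged below (connect adminq, ICReq/ICResp,
 * FABRIC CONNECT, then controller enable). */
#include <stddef.h>
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *
connect_cnode1(void)
{
    struct spdk_nvme_transport_id trid = {};

    if (spdk_nvme_transport_id_parse(&trid,
        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
        "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return NULL;
    }

    return spdk_nvme_connect(&trid, NULL, 0);
}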
00:21:24.228 [2024-04-24 21:36:46.858462] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.228 [2024-04-24 21:36:46.858476] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.228 [2024-04-24 21:36:46.858481] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.228 [2024-04-24 21:36:46.858486] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8a00) on tqpair=0x108ed20
00:21:24.228 [2024-04-24 21:36:46.858501] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:21:24.228 [2024-04-24 21:36:46.858508] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:21:24.228 [2024-04-24 21:36:46.858515] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:21:24.228 [2024-04-24 21:36:46.858530] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.228 [2024-04-24 21:36:46.858535] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.228 [2024-04-24 21:36:46.858540] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x108ed20)
00:21:24.228 [2024-04-24 21:36:46.858549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.228 [2024-04-24 21:36:46.858567] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8a00, cid 0, qid 0
00:21:24.228 [2024-04-24 21:36:46.858805] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.228 [2024-04-24 21:36:46.858814] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.228 [2024-04-24 21:36:46.858818] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.228 [2024-04-24 21:36:46.858823] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8a00) on tqpair=0x108ed20
00:21:24.228 [2024-04-24 21:36:46.858831] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:21:24.228 [2024-04-24 21:36:46.858841] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:21:24.228 [2024-04-24 21:36:46.858850] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.228 [2024-04-24 21:36:46.858855] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.228 [2024-04-24 21:36:46.858860] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x108ed20)
00:21:24.228 [2024-04-24 21:36:46.858868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.228 [2024-04-24 21:36:46.858881] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8a00, cid 0, qid 0
00:21:24.228 [2024-04-24 21:36:46.859016] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.228 [2024-04-24 21:36:46.859024] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.229 [2024-04-24 21:36:46.859028] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.859033] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8a00) on tqpair=0x108ed20
00:21:24.229 [2024-04-24 21:36:46.859040] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:21:24.229 [2024-04-24 21:36:46.859051] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
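The two Fabrics Property Gets above fetch the VS and CAP registers; once initialization finishes, the driver's cached copies can be read back through the public API. A small sketch (assuming the spdk_nvme_ctrlr_get_regs_* accessors from spdk/nvme.h):

/* Sketch: read back the VS/CAP values the driver fetched above. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
print_vs_cap(struct spdk_nvme_ctrlr *ctrlr)
{
    union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
    union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

    printf("NVMe version: %u.%u\n", vs.bits.mjr, vs.bits.mnr);
    printf("max queue entries (MQES+1): %u\n", cap.bits.mqes + 1);
}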
00:21:24.229 [2024-04-24 21:36:46.859059] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.859064] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.859068] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x108ed20)
00:21:24.229 [2024-04-24 21:36:46.859076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.229 [2024-04-24 21:36:46.859089] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8a00, cid 0, qid 0
00:21:24.229 [2024-04-24 21:36:46.859221] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.229 [2024-04-24 21:36:46.859228] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.229 [2024-04-24 21:36:46.859233] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.859238] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8a00) on tqpair=0x108ed20
00:21:24.229 [2024-04-24 21:36:46.859245] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:21:24.229 [2024-04-24 21:36:46.859257] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.859262] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.859267] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x108ed20)
00:21:24.229 [2024-04-24 21:36:46.859274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.229 [2024-04-24 21:36:46.859287] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8a00, cid 0, qid 0
00:21:24.229 [2024-04-24 21:36:46.859415] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.229 [2024-04-24 21:36:46.859422] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.229 [2024-04-24 21:36:46.859429] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.859434] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8a00) on tqpair=0x108ed20
00:21:24.229 [2024-04-24 21:36:46.859441] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:21:24.229 [2024-04-24 21:36:46.859447] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:21:24.229 [2024-04-24 21:36:46.859464] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:21:24.229 [2024-04-24 21:36:46.859571] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:21:24.229 [2024-04-24 21:36:46.859576] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:21:24.229 [2024-04-24 21:36:46.859585] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.859590] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.859595] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x108ed20)
00:21:24.229 [2024-04-24 21:36:46.859603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.229 [2024-04-24 21:36:46.859617] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8a00, cid 0, qid 0
00:21:24.229 [2024-04-24 21:36:46.859752] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.229 [2024-04-24 21:36:46.859759] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.229 [2024-04-24 21:36:46.859764] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.859769] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8a00) on tqpair=0x108ed20
00:21:24.229 [2024-04-24 21:36:46.859776] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:21:24.229 [2024-04-24 21:36:46.859788] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.859793] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.859798] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x108ed20)
00:21:24.229 [2024-04-24 21:36:46.859805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.229 [2024-04-24 21:36:46.859818] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8a00, cid 0, qid 0
00:21:24.229 [2024-04-24 21:36:46.859949] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.229 [2024-04-24 21:36:46.859956] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.229 [2024-04-24 21:36:46.859961] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.859966] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8a00) on tqpair=0x108ed20
00:21:24.229 [2024-04-24 21:36:46.859973] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:21:24.229 [2024-04-24 21:36:46.859979] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:21:24.229 [2024-04-24 21:36:46.859988] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:21:24.229 [2024-04-24 21:36:46.859998] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:21:24.229 [2024-04-24 21:36:46.860010] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860015] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x108ed20)
00:21:24.229 [2024-04-24 21:36:46.860025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.229 [2024-04-24 21:36:46.860038] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8a00, cid 0, qid 0
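The lines above are the spec-level enable handshake: write CC.EN = 1, then poll until CSTS.RDY reads 1. A self-contained sketch of that handshake follows; prop_get()/prop_set() and the simulated register array are hypothetical stand-ins for the Fabrics Property Get/Set commands in the trace, not SPDK APIs:

/* Spec-level sketch of the CC.EN/CSTS.RDY enable handshake. The
 * register offsets are from the NVMe spec; the "target" side is
 * simulated so the sketch is self-contained. */
#include <stdint.h>

#define NVME_REG_CC   0x14
#define NVME_REG_CSTS 0x1c

static uint32_t regs[0x40];  /* simulated controller property space */

static uint32_t prop_get(uint32_t off) { return regs[off / 4]; }

static void
prop_set(uint32_t off, uint32_t v)
{
    regs[off / 4] = v;
    if (off == NVME_REG_CC && (v & 1)) {
        regs[NVME_REG_CSTS / 4] |= 1;  /* simulated target sets CSTS.RDY */
    }
}

static void
enable_controller(void)
{
    uint32_t cc = prop_get(NVME_REG_CC);

    cc |= 1;                    /* CC.EN = 1                           */
    prop_set(NVME_REG_CC, cc);  /* the FABRIC PROPERTY SET above       */

    /* Poll CSTS.RDY (the repeated PROPERTY GETs above); the driver
     * bounds this loop with the 15000 ms timeout it logs. */
    while ((prop_get(NVME_REG_CSTS) & 1) == 0) {
    }
}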
00:21:24.229 [2024-04-24 21:36:46.860210] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:24.229 [2024-04-24 21:36:46.860218] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:24.229 [2024-04-24 21:36:46.860223] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860228] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x108ed20): datao=0, datal=4096, cccid=0
00:21:24.229 [2024-04-24 21:36:46.860234] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f8a00) on tqpair(0x108ed20): expected_datao=0, payload_size=4096
00:21:24.229 [2024-04-24 21:36:46.860240] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860247] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860252] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860496] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.229 [2024-04-24 21:36:46.860503] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.229 [2024-04-24 21:36:46.860508] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860513] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8a00) on tqpair=0x108ed20
00:21:24.229 [2024-04-24 21:36:46.860522] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:21:24.229 [2024-04-24 21:36:46.860529] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:21:24.229 [2024-04-24 21:36:46.860535] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:21:24.229 [2024-04-24 21:36:46.860540] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:21:24.229 [2024-04-24 21:36:46.860546] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:21:24.229 [2024-04-24 21:36:46.860552] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:21:24.229 [2024-04-24 21:36:46.860563] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:21:24.229 [2024-04-24 21:36:46.860571] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860576] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860581] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x108ed20)
00:21:24.229 [2024-04-24 21:36:46.860589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:24.229 [2024-04-24 21:36:46.860603] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8a00, cid 0, qid 0
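The identify_done lines above show the controller data (MDTS, CNTLID, and so on) being cached by the driver; applications read it back with spdk_nvme_ctrlr_get_data(). A short sketch:

/* Sketch: read the cached identify-controller data after init. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

    /* cntlid and mdts mirror the "CNTLID 0x0001" and
     * "MDTS max_xfer_size" debug lines in the trace. */
    printf("cntlid: 0x%04x\n", cdata->cntlid);
    printf("mdts:   %u\n", cdata->mdts);
    printf("subnqn: %s\n", (const char *)cdata->subnqn);
}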
00:21:24.229 [2024-04-24 21:36:46.860743] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.229 [2024-04-24 21:36:46.860751] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.229 [2024-04-24 21:36:46.860755] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860760] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8a00) on tqpair=0x108ed20
00:21:24.229 [2024-04-24 21:36:46.860769] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860774] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860779] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x108ed20)
00:21:24.229 [2024-04-24 21:36:46.860786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:24.229 [2024-04-24 21:36:46.860796] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860801] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860805] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x108ed20)
00:21:24.229 [2024-04-24 21:36:46.860812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:24.229 [2024-04-24 21:36:46.860819] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860824] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860829] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x108ed20)
00:21:24.229 [2024-04-24 21:36:46.860835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:24.229 [2024-04-24 21:36:46.860842] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860847] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.229 [2024-04-24 21:36:46.860852] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x108ed20)
00:21:24.230 [2024-04-24 21:36:46.860858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:24.230 [2024-04-24 21:36:46.860864] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.860878] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.860886] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.860891] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x108ed20)
00:21:24.230 [2024-04-24 21:36:46.860898] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.230 [2024-04-24 21:36:46.860912] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8a00, cid 0, qid 0
00:21:24.230 [2024-04-24 21:36:46.860919] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8b60, cid 1, qid 0
00:21:24.230 [2024-04-24 21:36:46.860924] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8cc0, cid 2, qid 0
00:21:24.230 [2024-04-24 21:36:46.860930] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8e20, cid 3, qid 0
00:21:24.230 [2024-04-24 21:36:46.860935] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 4, qid 0
00:21:24.230 [2024-04-24 21:36:46.861100] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.230 [2024-04-24 21:36:46.861107] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.230 [2024-04-24 21:36:46.861112] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.861117] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x108ed20
00:21:24.230 [2024-04-24 21:36:46.861124] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:21:24.230 [2024-04-24 21:36:46.861131] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.861144] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.861152] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.861160] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.861165] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.861172] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x108ed20)
00:21:24.230 [2024-04-24 21:36:46.861179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:24.230 [2024-04-24 21:36:46.861192] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 4, qid 0
00:21:24.230 [2024-04-24 21:36:46.861327] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.230 [2024-04-24 21:36:46.861334] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.230 [2024-04-24 21:36:46.861339] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.861344] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x108ed20
00:21:24.230 [2024-04-24 21:36:46.861387] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.861399] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.861408] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.861413] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x108ed20)
00:21:24.230 [2024-04-24 21:36:46.861420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.230 [2024-04-24 21:36:46.861433] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 4, qid 0
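The "Sending keep alive every 5000000 us" line above appears to be half the negotiated keep-alive timeout (KATO); the earlier shutdown trace showed a 10000 ms default. The halving is inferred from the logged numbers rather than quoted from the SPDK source:

/* Assumed relationship behind "Sending keep alive every 5000000 us":
 * send keep alives at half the KATO. 10000 ms * 1000 / 2 = 5000000 us. */
#include <stdint.h>

static uint64_t
keep_alive_interval_us(uint32_t kato_ms)
{
    return (uint64_t)kato_ms * 1000 / 2;
}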
00:21:24.230 [2024-04-24 21:36:46.861601] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:24.230 [2024-04-24 21:36:46.861610] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:24.230 [2024-04-24 21:36:46.861615] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.861620] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x108ed20): datao=0, datal=4096, cccid=4
00:21:24.230 [2024-04-24 21:36:46.861626] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f8f80) on tqpair(0x108ed20): expected_datao=0, payload_size=4096
00:21:24.230 [2024-04-24 21:36:46.861632] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.861639] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.861644] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.861876] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.230 [2024-04-24 21:36:46.861883] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.230 [2024-04-24 21:36:46.861887] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.861892] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x108ed20
00:21:24.230 [2024-04-24 21:36:46.861904] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:21:24.230 [2024-04-24 21:36:46.861920] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.861931] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.861940] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.861945] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x108ed20)
00:21:24.230 [2024-04-24 21:36:46.861952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.230 [2024-04-24 21:36:46.861966] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 4, qid 0
00:21:24.230 [2024-04-24 21:36:46.862122] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:24.230 [2024-04-24 21:36:46.862130] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:24.230 [2024-04-24 21:36:46.862138] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.862142] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x108ed20): datao=0, datal=4096, cccid=4
00:21:24.230 [2024-04-24 21:36:46.862149] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f8f80) on tqpair(0x108ed20): expected_datao=0, payload_size=4096
00:21:24.230 [2024-04-24 21:36:46.862154] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.862162] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.862167] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
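With "Namespace 1 was added" above, the controller's active-namespace list is populated; a minimal sketch of iterating it through the public SPDK API:

/* Sketch: enumerate the active namespaces discovered above. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void
list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    uint32_t nsid;

    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
         nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        printf("nsid %u: %" PRIu64 " sectors\n", nsid,
               spdk_nvme_ns_get_num_sectors(ns));
    }
}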
00:21:24.230 [2024-04-24 21:36:46.862404] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.230 [2024-04-24 21:36:46.862410] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.230 [2024-04-24 21:36:46.862415] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.862420] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x108ed20
00:21:24.230 [2024-04-24 21:36:46.862435] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.862446] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.862463] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.862469] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x108ed20)
00:21:24.230 [2024-04-24 21:36:46.862476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.230 [2024-04-24 21:36:46.862490] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 4, qid 0
00:21:24.230 [2024-04-24 21:36:46.862640] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:24.230 [2024-04-24 21:36:46.862647] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:24.230 [2024-04-24 21:36:46.862652] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.862657] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x108ed20): datao=0, datal=4096, cccid=4
00:21:24.230 [2024-04-24 21:36:46.862663] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f8f80) on tqpair(0x108ed20): expected_datao=0, payload_size=4096
00:21:24.230 [2024-04-24 21:36:46.862669] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.862676] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.862681] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.862921] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.230 [2024-04-24 21:36:46.862928] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.230 [2024-04-24 21:36:46.862932] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.862937] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x108ed20
00:21:24.230 [2024-04-24 21:36:46.862947] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.862958] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.862970] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.862978] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.862984] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.862993] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:21:24.230 [2024-04-24 21:36:46.862999] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:21:24.230 [2024-04-24 21:36:46.863005] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:21:24.230 [2024-04-24 21:36:46.863021] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.863026] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x108ed20)
00:21:24.230 [2024-04-24 21:36:46.863033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.230 [2024-04-24 21:36:46.863041] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:24.230 [2024-04-24 21:36:46.863046] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.231 [2024-04-24 21:36:46.863051] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x108ed20)
00:21:24.231 [2024-04-24 21:36:46.863057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:21:24.231 [2024-04-24 21:36:46.863073] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 4, qid 0
00:21:24.231 [2024-04-24 21:36:46.863079] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f90e0, cid 5, qid 0
00:21:24.231 [2024-04-24 21:36:46.863230] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.231 [2024-04-24 21:36:46.863238] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.231 [2024-04-24 21:36:46.863242] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.231 [2024-04-24 21:36:46.863247] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x108ed20
00:21:24.231 [2024-04-24 21:36:46.863256] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.231 [2024-04-24 21:36:46.863262] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.231 [2024-04-24 21:36:46.863267] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.231 [2024-04-24 21:36:46.863272] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f90e0) on tqpair=0x108ed20
00:21:24.231 [2024-04-24 21:36:46.863284] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.231 [2024-04-24 21:36:46.863289] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x108ed20)
00:21:24.231 [2024-04-24 21:36:46.863297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.231 [2024-04-24 21:36:46.863310] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f90e0, cid 5, qid 0
00:21:24.231 [2024-04-24 21:36:46.863442] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.231 [2024-04-24 21:36:46.863456] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.231 [2024-04-24 21:36:46.863461] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.231 [2024-04-24 21:36:46.863466] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f90e0) on tqpair=0x108ed20
00:21:24.231 [2024-04-24 21:36:46.863479] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.231 [2024-04-24 21:36:46.863484] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x108ed20)
00:21:24.231 [2024-04-24 21:36:46.863492] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.231 [2024-04-24 21:36:46.863505] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f90e0, cid 5, qid 0
00:21:24.231 [2024-04-24 21:36:46.863642] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.231 [2024-04-24 21:36:46.863649] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.231 [2024-04-24 21:36:46.863653] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.231 [2024-04-24 21:36:46.863661] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f90e0) on tqpair=0x108ed20
00:21:24.231 [2024-04-24 21:36:46.863674] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.231 [2024-04-24 21:36:46.863679] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x108ed20)
00:21:24.231 [2024-04-24 21:36:46.863686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.231 [2024-04-24 21:36:46.863699] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f90e0, cid 5, qid 0
00:21:24.231 [2024-04-24 21:36:46.863835] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:24.231 [2024-04-24 21:36:46.863842] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:24.231 [2024-04-24 21:36:46.863847] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:24.231 [2024-04-24 21:36:46.863852] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f90e0) on tqpair=0x108ed20
00:21:24.231 [2024-04-24 21:36:46.863866] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.231 [2024-04-24 21:36:46.863872] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x108ed20)
00:21:24.231 [2024-04-24 21:36:46.863879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.231 [2024-04-24 21:36:46.863887] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.231 [2024-04-24 21:36:46.863892] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x108ed20)
00:21:24.231 [2024-04-24 21:36:46.863899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.231 [2024-04-24 21:36:46.863907] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:24.231 [2024-04-24 21:36:46.863911] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x108ed20)
00:21:24.231 [2024-04-24 21:36:46.863918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.231 [2024-04-24 21:36:46.863926] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.863931] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x108ed20) 00:21:24.231 [2024-04-24 21:36:46.863938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.231 [2024-04-24 21:36:46.863951] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f90e0, cid 5, qid 0 00:21:24.231 [2024-04-24 21:36:46.863957] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 4, qid 0 00:21:24.231 [2024-04-24 21:36:46.863963] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f9240, cid 6, qid 0 00:21:24.231 [2024-04-24 21:36:46.863968] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 7, qid 0 00:21:24.231 [2024-04-24 21:36:46.864247] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.231 [2024-04-24 21:36:46.864256] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.231 [2024-04-24 21:36:46.864261] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864265] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x108ed20): datao=0, datal=8192, cccid=5 00:21:24.231 [2024-04-24 21:36:46.864271] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f90e0) on tqpair(0x108ed20): expected_datao=0, payload_size=8192 00:21:24.231 [2024-04-24 21:36:46.864277] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864285] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864290] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864299] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.231 [2024-04-24 21:36:46.864305] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.231 [2024-04-24 21:36:46.864310] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864314] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x108ed20): datao=0, datal=512, cccid=4 00:21:24.231 [2024-04-24 21:36:46.864320] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f8f80) on tqpair(0x108ed20): expected_datao=0, payload_size=512 00:21:24.231 [2024-04-24 21:36:46.864326] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864333] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864337] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864344] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.231 [2024-04-24 21:36:46.864350] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.231 [2024-04-24 21:36:46.864355] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864359] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x108ed20): datao=0, datal=512, 
cccid=6 00:21:24.231 [2024-04-24 21:36:46.864365] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f9240) on tqpair(0x108ed20): expected_datao=0, payload_size=512 00:21:24.231 [2024-04-24 21:36:46.864371] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864378] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864382] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864389] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.231 [2024-04-24 21:36:46.864395] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.231 [2024-04-24 21:36:46.864399] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864404] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x108ed20): datao=0, datal=4096, cccid=7 00:21:24.231 [2024-04-24 21:36:46.864410] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f93a0) on tqpair(0x108ed20): expected_datao=0, payload_size=4096 00:21:24.231 [2024-04-24 21:36:46.864416] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864423] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864427] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864534] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.231 [2024-04-24 21:36:46.864541] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.231 [2024-04-24 21:36:46.864545] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864551] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f90e0) on tqpair=0x108ed20 00:21:24.231 [2024-04-24 21:36:46.864566] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.231 [2024-04-24 21:36:46.864573] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.231 [2024-04-24 21:36:46.864577] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864582] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x108ed20 00:21:24.231 [2024-04-24 21:36:46.864593] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.231 [2024-04-24 21:36:46.864599] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.231 [2024-04-24 21:36:46.864604] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864608] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f9240) on tqpair=0x108ed20 00:21:24.231 [2024-04-24 21:36:46.864617] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.231 [2024-04-24 21:36:46.864624] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.231 [2024-04-24 21:36:46.864628] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.231 [2024-04-24 21:36:46.864635] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x108ed20 00:21:24.231 ===================================================== 00:21:24.231 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:24.231 ===================================================== 00:21:24.231 
Controller Capabilities/Features 00:21:24.231 ================================ 00:21:24.232 Vendor ID: 8086 00:21:24.232 Subsystem Vendor ID: 8086 00:21:24.232 Serial Number: SPDK00000000000001 00:21:24.232 Model Number: SPDK bdev Controller 00:21:24.232 Firmware Version: 24.05 00:21:24.232 Recommended Arb Burst: 6 00:21:24.232 IEEE OUI Identifier: e4 d2 5c 00:21:24.232 Multi-path I/O 00:21:24.232 May have multiple subsystem ports: Yes 00:21:24.232 May have multiple controllers: Yes 00:21:24.232 Associated with SR-IOV VF: No 00:21:24.232 Max Data Transfer Size: 131072 00:21:24.232 Max Number of Namespaces: 32 00:21:24.232 Max Number of I/O Queues: 127 00:21:24.232 NVMe Specification Version (VS): 1.3 00:21:24.232 NVMe Specification Version (Identify): 1.3 00:21:24.232 Maximum Queue Entries: 128 00:21:24.232 Contiguous Queues Required: Yes 00:21:24.232 Arbitration Mechanisms Supported 00:21:24.232 Weighted Round Robin: Not Supported 00:21:24.232 Vendor Specific: Not Supported 00:21:24.232 Reset Timeout: 15000 ms 00:21:24.232 Doorbell Stride: 4 bytes 00:21:24.232 NVM Subsystem Reset: Not Supported 00:21:24.232 Command Sets Supported 00:21:24.232 NVM Command Set: Supported 00:21:24.232 Boot Partition: Not Supported 00:21:24.232 Memory Page Size Minimum: 4096 bytes 00:21:24.232 Memory Page Size Maximum: 4096 bytes 00:21:24.232 Persistent Memory Region: Not Supported 00:21:24.232 Optional Asynchronous Events Supported 00:21:24.232 Namespace Attribute Notices: Supported 00:21:24.232 Firmware Activation Notices: Not Supported 00:21:24.232 ANA Change Notices: Not Supported 00:21:24.232 PLE Aggregate Log Change Notices: Not Supported 00:21:24.232 LBA Status Info Alert Notices: Not Supported 00:21:24.232 EGE Aggregate Log Change Notices: Not Supported 00:21:24.232 Normal NVM Subsystem Shutdown event: Not Supported 00:21:24.232 Zone Descriptor Change Notices: Not Supported 00:21:24.232 Discovery Log Change Notices: Not Supported 00:21:24.232 Controller Attributes 00:21:24.232 128-bit Host Identifier: Supported 00:21:24.232 Non-Operational Permissive Mode: Not Supported 00:21:24.232 NVM Sets: Not Supported 00:21:24.232 Read Recovery Levels: Not Supported 00:21:24.232 Endurance Groups: Not Supported 00:21:24.232 Predictable Latency Mode: Not Supported 00:21:24.232 Traffic Based Keep ALive: Not Supported 00:21:24.232 Namespace Granularity: Not Supported 00:21:24.232 SQ Associations: Not Supported 00:21:24.232 UUID List: Not Supported 00:21:24.232 Multi-Domain Subsystem: Not Supported 00:21:24.232 Fixed Capacity Management: Not Supported 00:21:24.232 Variable Capacity Management: Not Supported 00:21:24.232 Delete Endurance Group: Not Supported 00:21:24.232 Delete NVM Set: Not Supported 00:21:24.232 Extended LBA Formats Supported: Not Supported 00:21:24.232 Flexible Data Placement Supported: Not Supported 00:21:24.232 00:21:24.232 Controller Memory Buffer Support 00:21:24.232 ================================ 00:21:24.232 Supported: No 00:21:24.232 00:21:24.232 Persistent Memory Region Support 00:21:24.232 ================================ 00:21:24.232 Supported: No 00:21:24.232 00:21:24.232 Admin Command Set Attributes 00:21:24.232 ============================ 00:21:24.232 Security Send/Receive: Not Supported 00:21:24.232 Format NVM: Not Supported 00:21:24.232 Firmware Activate/Download: Not Supported 00:21:24.232 Namespace Management: Not Supported 00:21:24.232 Device Self-Test: Not Supported 00:21:24.232 Directives: Not Supported 00:21:24.232 NVMe-MI: Not Supported 00:21:24.232 Virtualization Management: 
Not Supported 00:21:24.232 Doorbell Buffer Config: Not Supported 00:21:24.232 Get LBA Status Capability: Not Supported 00:21:24.232 Command & Feature Lockdown Capability: Not Supported 00:21:24.232 Abort Command Limit: 4 00:21:24.232 Async Event Request Limit: 4 00:21:24.232 Number of Firmware Slots: N/A 00:21:24.232 Firmware Slot 1 Read-Only: N/A 00:21:24.232 Firmware Activation Without Reset: N/A 00:21:24.232 Multiple Update Detection Support: N/A 00:21:24.232 Firmware Update Granularity: No Information Provided 00:21:24.232 Per-Namespace SMART Log: No 00:21:24.232 Asymmetric Namespace Access Log Page: Not Supported 00:21:24.232 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:24.232 Command Effects Log Page: Supported 00:21:24.232 Get Log Page Extended Data: Supported 00:21:24.232 Telemetry Log Pages: Not Supported 00:21:24.232 Persistent Event Log Pages: Not Supported 00:21:24.232 Supported Log Pages Log Page: May Support 00:21:24.232 Commands Supported & Effects Log Page: Not Supported 00:21:24.232 Feature Identifiers & Effects Log Page:May Support 00:21:24.232 NVMe-MI Commands & Effects Log Page: May Support 00:21:24.232 Data Area 4 for Telemetry Log: Not Supported 00:21:24.232 Error Log Page Entries Supported: 128 00:21:24.232 Keep Alive: Supported 00:21:24.232 Keep Alive Granularity: 10000 ms 00:21:24.232 00:21:24.232 NVM Command Set Attributes 00:21:24.232 ========================== 00:21:24.232 Submission Queue Entry Size 00:21:24.232 Max: 64 00:21:24.232 Min: 64 00:21:24.232 Completion Queue Entry Size 00:21:24.232 Max: 16 00:21:24.232 Min: 16 00:21:24.232 Number of Namespaces: 32 00:21:24.232 Compare Command: Supported 00:21:24.232 Write Uncorrectable Command: Not Supported 00:21:24.232 Dataset Management Command: Supported 00:21:24.232 Write Zeroes Command: Supported 00:21:24.232 Set Features Save Field: Not Supported 00:21:24.232 Reservations: Supported 00:21:24.232 Timestamp: Not Supported 00:21:24.232 Copy: Supported 00:21:24.232 Volatile Write Cache: Present 00:21:24.232 Atomic Write Unit (Normal): 1 00:21:24.232 Atomic Write Unit (PFail): 1 00:21:24.232 Atomic Compare & Write Unit: 1 00:21:24.232 Fused Compare & Write: Supported 00:21:24.232 Scatter-Gather List 00:21:24.232 SGL Command Set: Supported 00:21:24.232 SGL Keyed: Supported 00:21:24.232 SGL Bit Bucket Descriptor: Not Supported 00:21:24.232 SGL Metadata Pointer: Not Supported 00:21:24.232 Oversized SGL: Not Supported 00:21:24.232 SGL Metadata Address: Not Supported 00:21:24.232 SGL Offset: Supported 00:21:24.232 Transport SGL Data Block: Not Supported 00:21:24.232 Replay Protected Memory Block: Not Supported 00:21:24.232 00:21:24.232 Firmware Slot Information 00:21:24.232 ========================= 00:21:24.232 Active slot: 1 00:21:24.232 Slot 1 Firmware Revision: 24.05 00:21:24.232 00:21:24.232 00:21:24.232 Commands Supported and Effects 00:21:24.232 ============================== 00:21:24.232 Admin Commands 00:21:24.232 -------------- 00:21:24.232 Get Log Page (02h): Supported 00:21:24.232 Identify (06h): Supported 00:21:24.232 Abort (08h): Supported 00:21:24.232 Set Features (09h): Supported 00:21:24.232 Get Features (0Ah): Supported 00:21:24.232 Asynchronous Event Request (0Ch): Supported 00:21:24.232 Keep Alive (18h): Supported 00:21:24.232 I/O Commands 00:21:24.232 ------------ 00:21:24.232 Flush (00h): Supported LBA-Change 00:21:24.232 Write (01h): Supported LBA-Change 00:21:24.232 Read (02h): Supported 00:21:24.232 Compare (05h): Supported 00:21:24.232 Write Zeroes (08h): Supported LBA-Change 00:21:24.232 
Dataset Management (09h): Supported LBA-Change 00:21:24.232 Copy (19h): Supported LBA-Change 00:21:24.232 Unknown (79h): Supported LBA-Change 00:21:24.232 Unknown (7Ah): Supported 00:21:24.232 00:21:24.232 Error Log 00:21:24.232 ========= 00:21:24.232 00:21:24.232 Arbitration 00:21:24.232 =========== 00:21:24.232 Arbitration Burst: 1 00:21:24.232 00:21:24.232 Power Management 00:21:24.232 ================ 00:21:24.232 Number of Power States: 1 00:21:24.232 Current Power State: Power State #0 00:21:24.232 Power State #0: 00:21:24.232 Max Power: 0.00 W 00:21:24.232 Non-Operational State: Operational 00:21:24.232 Entry Latency: Not Reported 00:21:24.232 Exit Latency: Not Reported 00:21:24.232 Relative Read Throughput: 0 00:21:24.232 Relative Read Latency: 0 00:21:24.232 Relative Write Throughput: 0 00:21:24.232 Relative Write Latency: 0 00:21:24.232 Idle Power: Not Reported 00:21:24.232 Active Power: Not Reported 00:21:24.232 Non-Operational Permissive Mode: Not Supported 00:21:24.232 00:21:24.232 Health Information 00:21:24.232 ================== 00:21:24.232 Critical Warnings: 00:21:24.232 Available Spare Space: OK 00:21:24.232 Temperature: OK 00:21:24.232 Device Reliability: OK 00:21:24.232 Read Only: No 00:21:24.232 Volatile Memory Backup: OK 00:21:24.233 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:24.233 Temperature Threshold: [2024-04-24 21:36:46.864726] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.233 [2024-04-24 21:36:46.864732] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x108ed20) 00:21:24.233 [2024-04-24 21:36:46.864740] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.233 [2024-04-24 21:36:46.864754] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 7, qid 0 00:21:24.233 [2024-04-24 21:36:46.864901] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.233 [2024-04-24 21:36:46.864909] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.233 [2024-04-24 21:36:46.864913] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.233 [2024-04-24 21:36:46.864918] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x108ed20 00:21:24.233 [2024-04-24 21:36:46.864949] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:24.233 [2024-04-24 21:36:46.864963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.233 [2024-04-24 21:36:46.864970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.233 [2024-04-24 21:36:46.864978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.233 [2024-04-24 21:36:46.864985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.233 [2024-04-24 21:36:46.864994] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.233 [2024-04-24 21:36:46.864999] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.233 [2024-04-24 21:36:46.865004] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x108ed20) 00:21:24.233 [2024-04-24 21:36:46.865012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.233 [2024-04-24 21:36:46.865026] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8e20, cid 3, qid 0 00:21:24.233 [2024-04-24 21:36:46.865165] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.233 [2024-04-24 21:36:46.865172] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.233 [2024-04-24 21:36:46.865177] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.233 [2024-04-24 21:36:46.865182] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8e20) on tqpair=0x108ed20 00:21:24.233 [2024-04-24 21:36:46.865191] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.233 [2024-04-24 21:36:46.865196] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.233 [2024-04-24 21:36:46.865201] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x108ed20) 00:21:24.233 [2024-04-24 21:36:46.865208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.233 [2024-04-24 21:36:46.865225] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8e20, cid 3, qid 0 00:21:24.233 [2024-04-24 21:36:46.865372] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.233 [2024-04-24 21:36:46.865380] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.233 [2024-04-24 21:36:46.865384] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.233 [2024-04-24 21:36:46.865389] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8e20) on tqpair=0x108ed20 00:21:24.233 [2024-04-24 21:36:46.865396] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:24.233 [2024-04-24 21:36:46.865402] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:24.233 [2024-04-24 21:36:46.865416] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.233 [2024-04-24 21:36:46.865422] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.233 [2024-04-24 21:36:46.865427] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x108ed20) 00:21:24.233 [2024-04-24 21:36:46.865434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.233 [2024-04-24 21:36:46.865447] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8e20, cid 3, qid 0 00:21:24.233 [2024-04-24 21:36:46.869463] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.233 [2024-04-24 21:36:46.869470] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.233 [2024-04-24 21:36:46.869475] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.233 [2024-04-24 21:36:46.869480] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8e20) on tqpair=0x108ed20 00:21:24.233 [2024-04-24 21:36:46.869492] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.233 [2024-04-24 21:36:46.869498] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.233 
[2024-04-24 21:36:46.869502] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x108ed20) 00:21:24.233 [2024-04-24 21:36:46.869510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.233 [2024-04-24 21:36:46.869523] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8e20, cid 3, qid 0 00:21:24.233 [2024-04-24 21:36:46.869744] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.233 [2024-04-24 21:36:46.869752] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.233 [2024-04-24 21:36:46.869757] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.233 [2024-04-24 21:36:46.869761] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8e20) on tqpair=0x108ed20 00:21:24.233 [2024-04-24 21:36:46.869771] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:21:24.233 0 Kelvin (-273 Celsius) 00:21:24.233 Available Spare: 0% 00:21:24.233 Available Spare Threshold: 0% 00:21:24.233 Life Percentage Used: 0% 00:21:24.233 Data Units Read: 0 00:21:24.233 Data Units Written: 0 00:21:24.233 Host Read Commands: 0 00:21:24.233 Host Write Commands: 0 00:21:24.233 Controller Busy Time: 0 minutes 00:21:24.233 Power Cycles: 0 00:21:24.233 Power On Hours: 0 hours 00:21:24.233 Unsafe Shutdowns: 0 00:21:24.233 Unrecoverable Media Errors: 0 00:21:24.233 Lifetime Error Log Entries: 0 00:21:24.233 Warning Temperature Time: 0 minutes 00:21:24.233 Critical Temperature Time: 0 minutes 00:21:24.233 00:21:24.233 Number of Queues 00:21:24.233 ================ 00:21:24.233 Number of I/O Submission Queues: 127 00:21:24.233 Number of I/O Completion Queues: 127 00:21:24.233 00:21:24.233 Active Namespaces 00:21:24.233 ================= 00:21:24.233 Namespace ID:1 00:21:24.233 Error Recovery Timeout: Unlimited 00:21:24.233 Command Set Identifier: NVM (00h) 00:21:24.233 Deallocate: Supported 00:21:24.233 Deallocated/Unwritten Error: Not Supported 00:21:24.233 Deallocated Read Value: Unknown 00:21:24.233 Deallocate in Write Zeroes: Not Supported 00:21:24.233 Deallocated Guard Field: 0xFFFF 00:21:24.233 Flush: Supported 00:21:24.233 Reservation: Supported 00:21:24.233 Namespace Sharing Capabilities: Multiple Controllers 00:21:24.233 Size (in LBAs): 131072 (0GiB) 00:21:24.233 Capacity (in LBAs): 131072 (0GiB) 00:21:24.233 Utilization (in LBAs): 131072 (0GiB) 00:21:24.233 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:24.233 EUI64: ABCDEF0123456789 00:21:24.233 UUID: 3bca1156-1a8e-4bb8-a8c3-31330d206d4e 00:21:24.233 Thin Provisioning: Not Supported 00:21:24.233 Per-NS Atomic Units: Yes 00:21:24.233 Atomic Boundary Size (Normal): 0 00:21:24.233 Atomic Boundary Size (PFail): 0 00:21:24.233 Atomic Boundary Offset: 0 00:21:24.233 Maximum Single Source Range Length: 65535 00:21:24.233 Maximum Copy Length: 65535 00:21:24.233 Maximum Source Range Count: 1 00:21:24.233 NGUID/EUI64 Never Reused: No 00:21:24.233 Namespace Write Protected: No 00:21:24.233 Number of LBA Formats: 1 00:21:24.233 Current LBA Format: LBA Format #00 00:21:24.234 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:24.234 00:21:24.234 21:36:46 -- host/identify.sh@51 -- # sync 00:21:24.234 21:36:46 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:24.234 21:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.234 
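The identify test above has finished dumping the controller and namespace data for nqn.2016-06.io.spdk:cnode1 and is now tearing the subsystem down. For anyone replaying this log, a minimal sketch of a comparable query run by hand against the same listener, assuming nvme-cli is available on the initiator side (the /dev/nvme0 node name is illustrative; the kernel assigns it at connect time):

    nvme discover -t tcp -a 10.0.0.2 -s 4420                     # should list nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0    # kernel-side view of the Controller Capabilities/Features dump above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1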
21:36:46 -- common/autotest_common.sh@10 -- # set +x 00:21:24.234 21:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.234 21:36:46 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:24.234 21:36:46 -- host/identify.sh@56 -- # nvmftestfini 00:21:24.234 21:36:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:24.234 21:36:46 -- nvmf/common.sh@117 -- # sync 00:21:24.234 21:36:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:24.234 21:36:46 -- nvmf/common.sh@120 -- # set +e 00:21:24.234 21:36:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:24.234 21:36:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:24.234 rmmod nvme_tcp 00:21:24.234 rmmod nvme_fabrics 00:21:24.234 rmmod nvme_keyring 00:21:24.234 21:36:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:24.234 21:36:46 -- nvmf/common.sh@124 -- # set -e 00:21:24.234 21:36:46 -- nvmf/common.sh@125 -- # return 0 00:21:24.234 21:36:46 -- nvmf/common.sh@478 -- # '[' -n 2929186 ']' 00:21:24.234 21:36:46 -- nvmf/common.sh@479 -- # killprocess 2929186 00:21:24.234 21:36:46 -- common/autotest_common.sh@936 -- # '[' -z 2929186 ']' 00:21:24.234 21:36:46 -- common/autotest_common.sh@940 -- # kill -0 2929186 00:21:24.234 21:36:46 -- common/autotest_common.sh@941 -- # uname 00:21:24.234 21:36:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:24.234 21:36:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2929186 00:21:24.234 21:36:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:24.234 21:36:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:24.234 21:36:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2929186' 00:21:24.234 killing process with pid 2929186 00:21:24.234 21:36:47 -- common/autotest_common.sh@955 -- # kill 2929186 00:21:24.234 [2024-04-24 21:36:47.006621] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:24.234 21:36:47 -- common/autotest_common.sh@960 -- # wait 2929186 00:21:24.493 21:36:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:24.493 21:36:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:24.493 21:36:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:24.493 21:36:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:24.493 21:36:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:24.493 21:36:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.493 21:36:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.493 21:36:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.028 21:36:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:27.028 00:21:27.028 real 0m10.215s 00:21:27.028 user 0m7.577s 00:21:27.028 sys 0m5.390s 00:21:27.028 21:36:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:27.028 21:36:49 -- common/autotest_common.sh@10 -- # set +x 00:21:27.028 ************************************ 00:21:27.028 END TEST nvmf_identify 00:21:27.028 ************************************ 00:21:27.028 21:36:49 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:27.028 21:36:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:27.028 21:36:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:27.028 21:36:49 -- 
common/autotest_common.sh@10 -- # set +x 00:21:27.028 ************************************ 00:21:27.028 START TEST nvmf_perf 00:21:27.028 ************************************ 00:21:27.028 21:36:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:27.028 * Looking for test storage... 00:21:27.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:27.028 21:36:49 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.028 21:36:49 -- nvmf/common.sh@7 -- # uname -s 00:21:27.028 21:36:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.028 21:36:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.028 21:36:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.028 21:36:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.028 21:36:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.028 21:36:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.028 21:36:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.028 21:36:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.028 21:36:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.028 21:36:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.028 21:36:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:27.028 21:36:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:27.028 21:36:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.028 21:36:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.028 21:36:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.028 21:36:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.028 21:36:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.028 21:36:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.028 21:36:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.028 21:36:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.028 21:36:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.028 21:36:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.028 21:36:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.028 21:36:49 -- paths/export.sh@5 -- # export PATH 00:21:27.028 21:36:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.028 21:36:49 -- nvmf/common.sh@47 -- # : 0 00:21:27.028 21:36:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.028 21:36:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.028 21:36:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.028 21:36:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.028 21:36:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.028 21:36:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.028 21:36:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.028 21:36:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.028 21:36:49 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:27.028 21:36:49 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:27.028 21:36:49 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:27.028 21:36:49 -- host/perf.sh@17 -- # nvmftestinit 00:21:27.028 21:36:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:27.028 21:36:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.028 21:36:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:27.028 21:36:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:27.028 21:36:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:27.028 21:36:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.028 21:36:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.028 21:36:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.028 21:36:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:27.028 21:36:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:27.028 21:36:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:27.028 21:36:49 -- common/autotest_common.sh@10 -- # set +x 00:21:33.601 21:36:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:33.601 21:36:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:33.601 21:36:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:33.601 21:36:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:33.601 21:36:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:33.601 21:36:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:33.601 21:36:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:33.601 21:36:55 -- nvmf/common.sh@295 -- # net_devs=() 
00:21:33.601 21:36:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:33.601 21:36:55 -- nvmf/common.sh@296 -- # e810=() 00:21:33.601 21:36:55 -- nvmf/common.sh@296 -- # local -ga e810 00:21:33.601 21:36:55 -- nvmf/common.sh@297 -- # x722=() 00:21:33.601 21:36:55 -- nvmf/common.sh@297 -- # local -ga x722 00:21:33.601 21:36:55 -- nvmf/common.sh@298 -- # mlx=() 00:21:33.601 21:36:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:33.601 21:36:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.601 21:36:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.601 21:36:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.601 21:36:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.601 21:36:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.601 21:36:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.601 21:36:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.601 21:36:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.601 21:36:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.601 21:36:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.601 21:36:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.601 21:36:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:33.601 21:36:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:33.601 21:36:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:33.601 21:36:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.601 21:36:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:33.601 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:33.601 21:36:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.601 21:36:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:33.601 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:33.601 21:36:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:33.601 21:36:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.601 21:36:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.601 21:36:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:33.601 21:36:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:21:33.601 21:36:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:33.601 Found net devices under 0000:af:00.0: cvl_0_0 00:21:33.601 21:36:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.601 21:36:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.601 21:36:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.601 21:36:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:33.601 21:36:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.601 21:36:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:33.601 Found net devices under 0000:af:00.1: cvl_0_1 00:21:33.601 21:36:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.601 21:36:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:33.601 21:36:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:33.601 21:36:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:33.601 21:36:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:33.601 21:36:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.601 21:36:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.601 21:36:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.601 21:36:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:33.601 21:36:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.601 21:36:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.601 21:36:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:33.601 21:36:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.601 21:36:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.601 21:36:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:33.601 21:36:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:33.601 21:36:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.601 21:36:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.601 21:36:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.601 21:36:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.601 21:36:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:33.601 21:36:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.601 21:36:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.601 21:36:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.601 21:36:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:33.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:21:33.601 00:21:33.601 --- 10.0.0.2 ping statistics --- 00:21:33.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.602 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:33.602 21:36:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:33.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:21:33.602 00:21:33.602 --- 10.0.0.1 ping statistics --- 00:21:33.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.602 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:21:33.602 21:36:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.602 21:36:56 -- nvmf/common.sh@411 -- # return 0 00:21:33.602 21:36:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:33.602 21:36:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.602 21:36:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:33.602 21:36:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:33.602 21:36:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.602 21:36:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:33.602 21:36:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:33.602 21:36:56 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:33.602 21:36:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:33.602 21:36:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:33.602 21:36:56 -- common/autotest_common.sh@10 -- # set +x 00:21:33.602 21:36:56 -- nvmf/common.sh@470 -- # nvmfpid=2933163 00:21:33.602 21:36:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:33.602 21:36:56 -- nvmf/common.sh@471 -- # waitforlisten 2933163 00:21:33.602 21:36:56 -- common/autotest_common.sh@817 -- # '[' -z 2933163 ']' 00:21:33.602 21:36:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.602 21:36:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:33.602 21:36:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.602 21:36:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:33.602 21:36:56 -- common/autotest_common.sh@10 -- # set +x 00:21:33.602 [2024-04-24 21:36:56.232199] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:21:33.602 [2024-04-24 21:36:56.232255] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.602 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.602 [2024-04-24 21:36:56.307514] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.602 [2024-04-24 21:36:56.380613] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.602 [2024-04-24 21:36:56.380647] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.602 [2024-04-24 21:36:56.380656] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.602 [2024-04-24 21:36:56.380665] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.602 [2024-04-24 21:36:56.380671] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
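The ping statistics above are the health check for the test topology: nvmf/common.sh moves one E810 port (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace for the target and leaves its peer (cvl_0_1, 10.0.0.1) in the root namespace for the initiator, after which nvmf_tgt is launched inside that namespace. Condensed from the xtrace above (all commands appear in the trace; this is a sketch, not the script itself):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF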
00:21:33.602 [2024-04-24 21:36:56.380717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.602 [2024-04-24 21:36:56.380812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.602 [2024-04-24 21:36:56.380896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.602 [2024-04-24 21:36:56.380898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.175 21:36:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:34.175 21:36:57 -- common/autotest_common.sh@850 -- # return 0 00:21:34.175 21:36:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:34.175 21:36:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:34.175 21:36:57 -- common/autotest_common.sh@10 -- # set +x 00:21:34.434 21:36:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.434 21:36:57 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:34.434 21:36:57 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:37.726 21:37:00 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:37.726 21:37:00 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:37.726 21:37:00 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:21:37.726 21:37:00 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:37.726 21:37:00 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:37.726 21:37:00 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:21:37.726 21:37:00 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:37.726 21:37:00 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:37.726 21:37:00 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:37.986 [2024-04-24 21:37:00.666006] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.986 21:37:00 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:38.245 21:37:00 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:38.245 21:37:00 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:38.245 21:37:01 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:38.245 21:37:01 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:38.505 21:37:01 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.765 [2024-04-24 21:37:01.400790] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.765 21:37:01 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:38.765 21:37:01 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:21:38.765 21:37:01 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:21:38.765 21:37:01 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
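Stripped of the xtrace wrappers, the perf target configuration traced above reduces to a short rpc.py sequence against the running nvmf_tgt (rpc.py is scripts/rpc.py in the SPDK checkout; Nvme0n1 exists because load_subsystem_config attached the local NVMe disk at 0000:d8:00.0 via gen_nvme.sh):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_malloc_create 64 512                                     # returns Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With that in place, the spdk_nvme_perf invocations that follow target 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'.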
00:21:38.765 21:37:01 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:21:40.146 Initializing NVMe Controllers 00:21:40.146 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:21:40.146 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:21:40.146 Initialization complete. Launching workers. 00:21:40.146 ======================================================== 00:21:40.146 Latency(us) 00:21:40.146 Device Information : IOPS MiB/s Average min max 00:21:40.146 PCIE (0000:d8:00.0) NSID 1 from core 0: 102886.81 401.90 310.65 23.98 7230.74 00:21:40.146 ======================================================== 00:21:40.146 Total : 102886.81 401.90 310.65 23.98 7230.74 00:21:40.146 00:21:40.146 21:37:02 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:40.146 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.523 Initializing NVMe Controllers 00:21:41.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:41.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:41.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:41.523 Initialization complete. Launching workers. 00:21:41.523 ======================================================== 00:21:41.523 Latency(us) 00:21:41.523 Device Information : IOPS MiB/s Average min max 00:21:41.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 90.77 0.35 11394.50 536.41 45549.63 00:21:41.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.87 0.20 20200.22 6984.12 48877.35 00:21:41.523 ======================================================== 00:21:41.523 Total : 142.63 0.56 14596.58 536.41 48877.35 00:21:41.523 00:21:41.524 21:37:04 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:41.783 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.163 Initializing NVMe Controllers 00:21:43.163 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:43.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:43.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:43.163 Initialization complete. Launching workers. 
00:21:43.163 ======================================================== 00:21:43.163 Latency(us) 00:21:43.163 Device Information : IOPS MiB/s Average min max 00:21:43.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8330.27 32.54 3841.63 777.17 8535.23 00:21:43.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3792.01 14.81 8453.27 5659.51 16125.34 00:21:43.163 ======================================================== 00:21:43.163 Total : 12122.28 47.35 5284.21 777.17 16125.34 00:21:43.163 00:21:43.163 21:37:05 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:43.163 21:37:05 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:43.163 21:37:05 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:43.163 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.700 Initializing NVMe Controllers 00:21:45.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:45.700 Controller IO queue size 128, less than required. 00:21:45.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:45.700 Controller IO queue size 128, less than required. 00:21:45.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:45.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:45.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:45.700 Initialization complete. Launching workers. 00:21:45.700 ======================================================== 00:21:45.700 Latency(us) 00:21:45.700 Device Information : IOPS MiB/s Average min max 00:21:45.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 906.54 226.63 144720.56 74096.05 251522.75 00:21:45.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 574.39 143.60 238097.05 71780.25 420109.95 00:21:45.700 ======================================================== 00:21:45.700 Total : 1480.93 370.23 180937.41 71780.25 420109.95 00:21:45.700 00:21:45.700 21:37:08 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:45.700 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.700 No valid NVMe controllers or AIO or URING devices found 00:21:45.700 Initializing NVMe Controllers 00:21:45.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:45.700 Controller IO queue size 128, less than required. 00:21:45.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:45.700 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:45.700 Controller IO queue size 128, less than required. 00:21:45.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:45.700 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:21:45.700 WARNING: Some requested NVMe devices were skipped 00:21:45.700 21:37:08 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:45.700 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.998 Initializing NVMe Controllers 00:21:48.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:48.998 Controller IO queue size 128, less than required. 00:21:48.998 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:48.998 Controller IO queue size 128, less than required. 00:21:48.998 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:48.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:48.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:48.998 Initialization complete. Launching workers. 00:21:48.998 00:21:48.998 ==================== 00:21:48.998 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:48.998 TCP transport: 00:21:48.998 polls: 51742 00:21:48.998 idle_polls: 14482 00:21:48.998 sock_completions: 37260 00:21:48.998 nvme_completions: 3343 00:21:48.998 submitted_requests: 4974 00:21:48.998 queued_requests: 1 00:21:48.998 00:21:48.998 ==================== 00:21:48.998 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:48.998 TCP transport: 00:21:48.998 polls: 57871 00:21:48.998 idle_polls: 18072 00:21:48.998 sock_completions: 39799 00:21:48.998 nvme_completions: 3325 00:21:48.998 submitted_requests: 4976 00:21:48.998 queued_requests: 1 00:21:48.998 ======================================================== 00:21:48.998 Latency(us) 00:21:48.998 Device Information : IOPS MiB/s Average min max 00:21:48.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 834.59 208.65 157846.79 78826.37 255446.22 00:21:48.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 830.09 207.52 158464.60 77473.48 219446.63 00:21:48.998 ======================================================== 00:21:48.998 Total : 1664.68 416.17 158154.86 77473.48 255446.22 00:21:48.998 00:21:48.998 21:37:11 -- host/perf.sh@66 -- # sync 00:21:48.998 21:37:11 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.998 21:37:11 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:48.998 21:37:11 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:48.998 21:37:11 -- host/perf.sh@114 -- # nvmftestfini 00:21:48.998 21:37:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:48.998 21:37:11 -- nvmf/common.sh@117 -- # sync 00:21:48.998 21:37:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:48.998 21:37:11 -- nvmf/common.sh@120 -- # set +e 00:21:48.998 21:37:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:48.998 21:37:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:48.998 rmmod nvme_tcp 00:21:48.998 rmmod nvme_fabrics 00:21:48.998 rmmod nvme_keyring 00:21:48.998 21:37:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:48.998 21:37:11 -- nvmf/common.sh@124 -- # set -e 00:21:48.998 21:37:11 -- nvmf/common.sh@125 -- # return 0 00:21:48.998 21:37:11 -- 
00:21:48.998 21:37:11 -- host/perf.sh@66 -- # sync
00:21:48.998 21:37:11 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:48.998 21:37:11 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:21:48.998 21:37:11 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:21:48.998 21:37:11 -- host/perf.sh@114 -- # nvmftestfini
00:21:48.998 21:37:11 -- nvmf/common.sh@477 -- # nvmfcleanup
00:21:48.998 21:37:11 -- nvmf/common.sh@117 -- # sync
00:21:48.998 21:37:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:48.998 21:37:11 -- nvmf/common.sh@120 -- # set +e
00:21:48.998 21:37:11 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:48.998 21:37:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:48.998 rmmod nvme_tcp
00:21:48.998 rmmod nvme_fabrics
00:21:48.998 rmmod nvme_keyring
00:21:48.998 21:37:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:48.998 21:37:11 -- nvmf/common.sh@124 -- # set -e
00:21:48.998 21:37:11 -- nvmf/common.sh@125 -- # return 0
00:21:48.998 21:37:11 -- nvmf/common.sh@478 -- # '[' -n 2933163 ']'
00:21:48.998 21:37:11 -- nvmf/common.sh@479 -- # killprocess 2933163
00:21:48.998 21:37:11 -- common/autotest_common.sh@936 -- # '[' -z 2933163 ']'
00:21:48.998 21:37:11 -- common/autotest_common.sh@940 -- # kill -0 2933163
00:21:48.998 21:37:11 -- common/autotest_common.sh@941 -- # uname
00:21:48.998 21:37:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:48.998 21:37:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2933163
00:21:48.998 21:37:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:21:48.998 21:37:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:21:48.998 21:37:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2933163'
00:21:48.998 killing process with pid 2933163
00:21:48.998 21:37:11 -- common/autotest_common.sh@955 -- # kill 2933163
00:21:48.998 21:37:11 -- common/autotest_common.sh@960 -- # wait 2933163
00:21:50.900 21:37:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:21:50.900 21:37:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:21:50.900 21:37:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:21:50.900 21:37:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:50.900 21:37:13 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:50.900 21:37:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:50.900 21:37:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:50.900 21:37:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:53.435 21:37:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:53.435
00:21:53.435 real 0m26.226s
00:21:53.435 user 1m8.977s
00:21:53.435 sys 0m8.526s
00:21:53.435 21:37:15 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:21:53.435 21:37:15 -- common/autotest_common.sh@10 -- # set +x
00:21:53.435 ************************************
00:21:53.435 END TEST nvmf_perf
00:21:53.435 ************************************
00:21:53.435 21:37:15 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:21:53.435 21:37:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:21:53.435 21:37:15 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:53.435 21:37:15 -- common/autotest_common.sh@10 -- # set +x
00:21:53.435 ************************************
00:21:53.435 START TEST nvmf_fio_host
00:21:53.435 ************************************
00:21:53.435 21:37:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:21:53.435 * Looking for test storage...
00:21:53.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:53.435 21:37:16 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.435 21:37:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.435 21:37:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.435 21:37:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.435 21:37:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.435 21:37:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.435 21:37:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.436 21:37:16 -- paths/export.sh@5 -- # export PATH 00:21:53.436 21:37:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.436 21:37:16 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.436 21:37:16 -- nvmf/common.sh@7 -- # uname -s 00:21:53.436 21:37:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.436 21:37:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.436 21:37:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.436 21:37:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.436 21:37:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.436 21:37:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.436 21:37:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.436 21:37:16 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.436 21:37:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.436 21:37:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.436 21:37:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:53.436 21:37:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:53.436 21:37:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.436 21:37:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.436 21:37:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.436 21:37:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.436 21:37:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.436 21:37:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.436 21:37:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.436 21:37:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.436 21:37:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.436 21:37:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.436 21:37:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.436 21:37:16 -- paths/export.sh@5 -- # export PATH 00:21:53.436 21:37:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.436 21:37:16 -- nvmf/common.sh@47 -- # : 0 00:21:53.436 21:37:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:53.436 21:37:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:53.436 21:37:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.436 21:37:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.436 21:37:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.436 21:37:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:53.436 21:37:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:53.436 21:37:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:53.436 21:37:16 -- host/fio.sh@12 -- # nvmftestinit 00:21:53.436 21:37:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:53.436 21:37:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.436 21:37:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:53.436 21:37:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:53.436 21:37:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:53.436 21:37:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.436 21:37:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.436 21:37:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.436 21:37:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:53.436 21:37:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:53.436 21:37:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.436 21:37:16 -- common/autotest_common.sh@10 -- # set +x 00:22:00.075 21:37:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:00.075 21:37:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:00.075 21:37:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:00.075 21:37:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:00.075 21:37:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:00.075 21:37:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:00.075 21:37:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:00.075 21:37:22 -- nvmf/common.sh@295 -- # net_devs=() 00:22:00.075 21:37:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:00.075 21:37:22 -- nvmf/common.sh@296 -- # e810=() 00:22:00.075 21:37:22 -- nvmf/common.sh@296 -- # local -ga e810 00:22:00.075 21:37:22 -- nvmf/common.sh@297 -- # x722=() 00:22:00.075 21:37:22 -- nvmf/common.sh@297 -- # local -ga x722 00:22:00.075 21:37:22 -- nvmf/common.sh@298 -- # mlx=() 00:22:00.075 21:37:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:00.075 21:37:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.075 21:37:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.075 21:37:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.075 21:37:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.075 21:37:22 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.075 21:37:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.075 21:37:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.075 21:37:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.075 21:37:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.075 21:37:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.075 21:37:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.075 21:37:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:00.075 21:37:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:00.075 21:37:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:00.075 21:37:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.075 21:37:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:00.075 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:00.075 21:37:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.075 21:37:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:00.075 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:00.075 21:37:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:00.075 21:37:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:00.075 21:37:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.075 21:37:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.075 21:37:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:00.075 21:37:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.075 21:37:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:00.075 Found net devices under 0000:af:00.0: cvl_0_0 00:22:00.075 21:37:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.075 21:37:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.075 21:37:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.075 21:37:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:00.075 21:37:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.075 21:37:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:00.075 Found net devices under 0000:af:00.1: cvl_0_1 00:22:00.075 21:37:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.075 21:37:22 -- 
nvmf/common.sh@393 -- # (( 2 == 0 ))
00:22:00.075 21:37:22 -- nvmf/common.sh@403 -- # is_hw=yes
00:22:00.075 21:37:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:22:00.075 21:37:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:22:00.075 21:37:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:22:00.075 21:37:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:00.075 21:37:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:00.075 21:37:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:00.075 21:37:22 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:22:00.075 21:37:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:00.075 21:37:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:00.075 21:37:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:22:00.075 21:37:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:00.075 21:37:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:00.075 21:37:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:22:00.075 21:37:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:22:00.075 21:37:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:22:00.075 21:37:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:00.075 21:37:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:00.075 21:37:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:00.075 21:37:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:00.075 21:37:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:00.075 21:37:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:00.075 21:37:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:00.075 21:37:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:00.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:00.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms
00:22:00.075
00:22:00.075 --- 10.0.0.2 ping statistics ---
00:22:00.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:00.075 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms
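[Editor's note] The ping above (and the one that follows) verifies the link layout common.sh just built: the target-side port (cvl_0_0, 10.0.0.2) is moved into a private network namespace while the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace, so NVMe/TCP traffic really crosses the cable between the two physical ports. Condensed from the traced commands (interface names are specific to this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT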
00:22:00.075 21:37:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:00.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:00.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms
00:22:00.075
00:22:00.075 --- 10.0.0.1 ping statistics ---
00:22:00.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:00.075 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms
00:22:00.075 21:37:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:00.075 21:37:22 -- nvmf/common.sh@411 -- # return 0
00:22:00.075 21:37:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:22:00.075 21:37:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:00.075 21:37:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:22:00.075 21:37:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:22:00.075 21:37:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:00.075 21:37:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:22:00.075 21:37:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:22:00.075 21:37:22 -- host/fio.sh@14 -- # [[ y != y ]]
00:22:00.075 21:37:22 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt
00:22:00.075 21:37:22 -- common/autotest_common.sh@710 -- # xtrace_disable
00:22:00.075 21:37:22 -- common/autotest_common.sh@10 -- # set +x
00:22:00.075 21:37:22 -- host/fio.sh@22 -- # nvmfpid=2939846
00:22:00.075 21:37:22 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:22:00.075 21:37:22 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:00.075 21:37:22 -- host/fio.sh@26 -- # waitforlisten 2939846
00:22:00.075 21:37:22 -- common/autotest_common.sh@817 -- # '[' -z 2939846 ']'
00:22:00.075 21:37:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:00.075 21:37:22 -- common/autotest_common.sh@822 -- # local max_retries=100
00:22:00.075 21:37:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:00.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:00.075 21:37:22 -- common/autotest_common.sh@826 -- # xtrace_disable
00:22:00.075 21:37:22 -- common/autotest_common.sh@10 -- # set +x
00:22:00.075 [2024-04-24 21:37:22.825872] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:22:00.075 [2024-04-24 21:37:22.825918] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:00.075 EAL: No free 2048 kB hugepages reported on node 1
00:22:00.075 [2024-04-24 21:37:22.902469] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:00.359 [2024-04-24 21:37:22.986432] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:00.359 [2024-04-24 21:37:22.986470] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:00.359 [2024-04-24 21:37:22.986479] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:00.359 [2024-04-24 21:37:22.986487] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:00.359 [2024-04-24 21:37:22.986494] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
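[Editor's note] The notices above are the nvmf_tgt application coming up inside the target namespace; stripped of the harness wrappers, the traced launch reduces to the following, and the enabled tracepoints can be sampled at runtime exactly as the notice suggests:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
    # runtime snapshot of the nvmf tracepoints, per the NOTICE above:
    spdk_trace -s nvmf -i 0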
00:22:00.359 [2024-04-24 21:37:22.986538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:00.359 [2024-04-24 21:37:22.986822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:22:00.359 [2024-04-24 21:37:22.986904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:22:00.359 [2024-04-24 21:37:22.986906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:00.925 21:37:23 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:22:00.925 21:37:23 -- common/autotest_common.sh@850 -- # return 0
00:22:00.925 21:37:23 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:00.925 21:37:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.925 21:37:23 -- common/autotest_common.sh@10 -- # set +x
00:22:00.926 [2024-04-24 21:37:23.634131] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:00.926 21:37:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.926 21:37:23 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt
00:22:00.926 21:37:23 -- common/autotest_common.sh@716 -- # xtrace_disable
00:22:00.926 21:37:23 -- common/autotest_common.sh@10 -- # set +x
00:22:00.926 21:37:23 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:22:00.926 21:37:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.926 21:37:23 -- common/autotest_common.sh@10 -- # set +x
00:22:00.926 Malloc1
00:22:00.926 21:37:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.926 21:37:23 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:00.926 21:37:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.926 21:37:23 -- common/autotest_common.sh@10 -- # set +x
00:22:00.926 21:37:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.926 21:37:23 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:22:00.926 21:37:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.926 21:37:23 -- common/autotest_common.sh@10 -- # set +x
00:22:00.926 21:37:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.926 21:37:23 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:00.926 21:37:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.926 21:37:23 -- common/autotest_common.sh@10 -- # set +x
00:22:00.926 [2024-04-24 21:37:23.728776] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:00.926 21:37:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:22:00.926 21:37:23 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:22:00.926 21:37:23 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:00.926 21:37:23 -- common/autotest_common.sh@10 -- # set +x
00:22:00.926 21:37:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
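[Editor's note] Stripped of the rpc_cmd plumbing, the target bring-up traced above is five RPCs against the freshly started nvmf_tgt (rpc.py path shortened):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420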
00:22:00.926 21:37:23 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:22:00.926 21:37:23 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:22:00.926 21:37:23 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:22:00.926 21:37:23 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio
00:22:00.926 21:37:23 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:22:00.926 21:37:23 -- common/autotest_common.sh@1325 -- # local sanitizers
00:22:00.926 21:37:23 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:00.926 21:37:23 -- common/autotest_common.sh@1327 -- # shift
00:22:00.926 21:37:23 -- common/autotest_common.sh@1329 -- # local asan_lib=
00:22:00.926 21:37:23 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:22:00.926 21:37:23 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:00.926 21:37:23 -- common/autotest_common.sh@1331 -- # grep libasan
00:22:00.926 21:37:23 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:22:00.926 21:37:23 -- common/autotest_common.sh@1331 -- # asan_lib=
00:22:00.926 21:37:23 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:22:00.926 21:37:23 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:22:00.926 21:37:23 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:00.926 21:37:23 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan
00:22:00.926 21:37:23 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:22:00.926 21:37:23 -- common/autotest_common.sh@1331 -- # asan_lib=
00:22:00.926 21:37:23 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:22:00.926 21:37:23 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:22:00.926 21:37:23 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:22:01.492 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:22:01.492 fio-3.35
00:22:01.492 Starting 1 thread
00:22:01.492 EAL: No free 2048 kB hugepages reported on node 1
00:22:04.028
00:22:04.028 test: (groupid=0, jobs=1): err= 0: pid=2940263: Wed Apr 24 21:37:26 2024
00:22:04.028 read: IOPS=11.4k, BW=44.5MiB/s (46.6MB/s)(89.2MiB/2005msec)
00:22:04.028 slat (nsec): min=1485, max=329428, avg=1730.66, stdev=2702.36
00:22:04.028 clat (usec): min=3341, max=15676, avg=6509.13, stdev=1658.86
00:22:04.028 lat (usec): min=3343, max=15678, avg=6510.86, stdev=1658.97
00:22:04.028 clat percentiles (usec):
00:22:04.028 | 1.00th=[ 4293], 5.00th=[ 4948], 10.00th=[ 5211], 20.00th=[ 5538],
00:22:04.028 | 30.00th=[ 5735], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6259],
00:22:04.028 | 70.00th=[ 6456], 80.00th=[ 6980], 90.00th=[ 8586], 95.00th=[10421],
00:22:04.028 | 99.00th=[13042], 99.50th=[14091], 99.90th=[15270], 99.95th=[15401],
00:22:04.028 | 99.99th=[15664]
00:22:04.028 bw ( KiB/s): min=43168, max=47072, per=99.96%, avg=45536.00, stdev=1859.87, samples=4
00:22:04.028 iops : min=10792, max=11768, avg=11384.00, stdev=464.97, samples=4
00:22:04.028 write: IOPS=11.3k, BW=44.2MiB/s (46.4MB/s)(88.7MiB/2005msec); 0 zone resets
00:22:04.028 slat (nsec): min=1538, max=256855, avg=1816.53, stdev=1963.72
00:22:04.028 clat (usec): min=2050, max=11689, avg=4682.54, stdev=921.87
00:22:04.028 lat (usec): min=2052, max=11713, avg=4684.36, stdev=922.12
00:22:04.028 clat percentiles (usec):
00:22:04.028 | 1.00th=[ 2900], 5.00th=[ 3359], 10.00th=[ 3654], 20.00th=[ 4080],
00:22:04.028 | 30.00th=[ 4359], 40.00th=[ 4490], 50.00th=[ 4621], 60.00th=[ 4817],
00:22:04.028 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5473], 95.00th=[ 6259],
00:22:04.028 | 99.00th=[ 7898], 99.50th=[ 8979], 99.90th=[10683], 99.95th=[10814],
00:22:04.028 | 99.99th=[11600]
00:22:04.028 bw ( KiB/s): min=43640, max=46072, per=99.98%, avg=45276.00, stdev=1116.98, samples=4
00:22:04.028 iops : min=10910, max=11518, avg=11319.00, stdev=279.24, samples=4
00:22:04.028 lat (msec) : 4=9.10%, 10=87.88%, 20=3.02%
00:22:04.028 cpu : usr=62.97%, sys=29.84%, ctx=79, majf=0, minf=4
00:22:04.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:22:04.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:04.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:04.028 issued rwts: total=22833,22700,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:04.028 latency : target=0, window=0, percentile=100.00%, depth=128
00:22:04.028
00:22:04.028 Run status group 0 (all jobs):
00:22:04.028 READ: bw=44.5MiB/s (46.6MB/s), 44.5MiB/s-44.5MiB/s (46.6MB/s-46.6MB/s), io=89.2MiB (93.5MB), run=2005-2005msec
00:22:04.028 WRITE: bw=44.2MiB/s (46.4MB/s), 44.2MiB/s-44.2MiB/s (46.4MB/s-46.4MB/s), io=88.7MiB (93.0MB), run=2005-2005msec
00:22:04.028 21:37:26 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:22:04.028 21:37:26 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:22:04.028 21:37:26 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio
00:22:04.028 21:37:26 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:22:04.028 21:37:26 -- common/autotest_common.sh@1325 -- # local sanitizers
00:22:04.028 21:37:26 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:04.028 21:37:26 -- common/autotest_common.sh@1327 -- # shift
00:22:04.028 21:37:26 -- common/autotest_common.sh@1329 -- # local asan_lib=
00:22:04.028 21:37:26 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:22:04.028 21:37:26 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:04.028 21:37:26 -- common/autotest_common.sh@1331 -- # grep libasan
00:22:04.028 21:37:26 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:22:04.028 21:37:26 -- common/autotest_common.sh@1331 -- # asan_lib=
00:22:04.028 21:37:26 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:22:04.028 21:37:26 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:22:04.028 21:37:26 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:22:04.028 21:37:26 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan
00:22:04.028 21:37:26 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:22:04.028 21:37:26 -- common/autotest_common.sh@1331 -- # asan_lib=
00:22:04.028 21:37:26 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:22:04.028 21:37:26 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:22:04.028 21:37:26 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:22:04.028 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:22:04.028 fio-3.35
00:22:04.028 Starting 1 thread
00:22:04.287 EAL: No free 2048 kB hugepages reported on node 1
00:22:06.849
00:22:06.849 test: (groupid=0, jobs=1): err= 0: pid=2940828: Wed Apr 24 21:37:29 2024
00:22:06.849 read: IOPS=9386, BW=147MiB/s (154MB/s)(294MiB/2005msec)
00:22:06.849 slat (nsec): min=2291, max=79336, avg=2676.58, stdev=1227.73
00:22:06.849 clat (usec): min=2631, max=50006, avg=8473.35, stdev=4842.50
00:22:06.849 lat (usec): min=2634, max=50009, avg=8476.02, stdev=4842.80
00:22:06.849 clat percentiles (usec):
00:22:06.849 | 1.00th=[ 3720], 5.00th=[ 4621], 10.00th=[ 5145], 20.00th=[ 5932],
00:22:06.849 | 30.00th=[ 6521], 40.00th=[ 7046], 50.00th=[ 7504], 60.00th=[ 8029],
00:22:06.849 | 70.00th=[ 8717], 80.00th=[ 9634], 90.00th=[11076], 95.00th=[14091],
00:22:06.849 | 99.00th=[29230], 99.50th=[45351], 99.90th=[49021], 99.95th=[49546],
00:22:06.849 | 99.99th=[50070]
00:22:06.849 bw ( KiB/s): min=68160, max=87392, per=49.08%, avg=73712.00, stdev=9147.02, samples=4
00:22:06.849 iops : min= 4260, max= 5462, avg=4607.00, stdev=571.69, samples=4
00:22:06.849 write: IOPS=5588, BW=87.3MiB/s (91.6MB/s)(151MiB/1729msec); 0 zone resets
00:22:06.849 slat (usec): min=27, max=300, avg=29.88, stdev= 6.24
00:22:06.849 clat (usec): min=4116, max=31582, avg=9091.67, stdev=3439.78
00:22:06.849 lat (usec): min=4145, max=31614, avg=9121.56, stdev=3442.69
00:22:06.849 clat percentiles (usec):
00:22:06.849 | 1.00th=[ 5735], 5.00th=[ 6390], 10.00th=[ 6718], 20.00th=[ 7242],
00:22:06.849 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 8848],
00:22:06.849 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[11076], 95.00th=[12780],
00:22:06.849 | 99.00th=[27919], 99.50th=[29754], 99.90th=[30278], 99.95th=[30278],
00:22:06.849 | 99.99th=[31582]
00:22:06.849 bw ( KiB/s): min=71488, max=91136, per=85.82%, avg=76736.00, stdev=9619.43, samples=4
00:22:06.849 iops : min= 4468, max= 5696, avg=4796.00, stdev=601.21, samples=4
00:22:06.849 lat (msec) : 4=1.31%, 10=80.34%, 20=15.44%, 50=2.90%, 100=0.01%
00:22:06.849 cpu : usr=79.64%, sys=15.72%, ctx=28, majf=0, minf=1
00:22:06.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5%
00:22:06.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:06.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:06.849 issued rwts: total=18820,9662,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:06.849 latency : target=0, window=0, percentile=100.00%, depth=128
00:22:06.849
00:22:06.849 Run status group 0 (all jobs):
00:22:06.849 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=294MiB (308MB), run=2005-2005msec
00:22:06.849 WRITE: bw=87.3MiB/s (91.6MB/s), 87.3MiB/s-87.3MiB/s (91.6MB/s-91.6MB/s), io=151MiB (158MB), run=1729-1729msec
00:22:06.849 21:37:29 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
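[Editor's note] Both fio jobs above follow the same pattern: stock fio is pointed at the SPDK NVMe plugin through LD_PRELOAD, and the NVMe-oF connection parameters ride in via --filename instead of a device path. Schematically (paths shortened), plus a quick consistency check on the 16KiB run:

    LD_PRELOAD=.../spdk/build/fio/spdk_nvme /usr/src/fio/fio mock_sgl_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
    # sanity check: read IOPS x block size should roughly match the reported bandwidth
    echo "$(( 9386 * 16 / 1024 )) MiB/s"   # ~146 MiB/s vs the reported BW=147MiB/s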
00:22:06.849 21:37:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:22:06.849 21:37:29 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']'
00:22:06.849 21:37:29 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT
00:22:06.849 21:37:29 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state
00:22:06.849 21:37:29 -- host/fio.sh@84 -- # nvmftestfini
00:22:06.849 21:37:29 -- nvmf/common.sh@477 -- # nvmfcleanup
00:22:06.849 21:37:29 -- nvmf/common.sh@117 -- # sync
00:22:06.849 21:37:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:06.849 21:37:29 -- nvmf/common.sh@120 -- # set +e
00:22:06.849 21:37:29 -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:06.849 21:37:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:06.849 rmmod nvme_tcp
00:22:06.849 rmmod nvme_fabrics
00:22:06.849 rmmod nvme_keyring
00:22:06.849 21:37:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:06.849 21:37:29 -- nvmf/common.sh@124 -- # set -e
00:22:06.849 21:37:29 -- nvmf/common.sh@125 -- # return 0
00:22:06.849 21:37:29 -- nvmf/common.sh@478 -- # '[' -n 2939846 ']'
00:22:06.849 21:37:29 -- nvmf/common.sh@479 -- # killprocess 2939846
00:22:06.849 21:37:29 -- common/autotest_common.sh@936 -- # '[' -z 2939846 ']'
00:22:06.849 21:37:29 -- common/autotest_common.sh@940 -- # kill -0 2939846
00:22:06.849 21:37:29 -- common/autotest_common.sh@941 -- # uname
00:22:06.849 21:37:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:06.849 21:37:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2939846
00:22:06.849 21:37:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:06.849 21:37:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:06.849 21:37:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2939846'
00:22:06.849 killing process with pid 2939846
00:22:06.849 21:37:29 -- common/autotest_common.sh@955 -- # kill 2939846
00:22:06.849 21:37:29 -- common/autotest_common.sh@960 -- # wait 2939846
00:22:07.108 21:37:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:22:07.108 21:37:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:22:07.108 21:37:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:22:07.108 21:37:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:07.108 21:37:29 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:07.108 21:37:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:07.108 21:37:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:07.108 21:37:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:09.010 21:37:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:09.010
00:22:09.010 real 0m15.926s
00:22:09.010 user 0m48.228s
00:22:09.010 sys 0m7.403s
00:22:09.010 21:37:31 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:22:09.010 21:37:31 -- common/autotest_common.sh@10 -- # set +x
00:22:09.010 ************************************
00:22:09.010 END TEST nvmf_fio_host
00:22:09.010 ************************************
00:22:09.010 21:37:31 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:22:09.010 21:37:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:22:09.010 21:37:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:22:09.010 21:37:31 -- common/autotest_common.sh@10 --
# set +x 00:22:09.269 ************************************ 00:22:09.269 START TEST nvmf_failover 00:22:09.269 ************************************ 00:22:09.269 21:37:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:09.269 * Looking for test storage... 00:22:09.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:09.269 21:37:32 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:09.528 21:37:32 -- nvmf/common.sh@7 -- # uname -s 00:22:09.528 21:37:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.528 21:37:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.528 21:37:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.528 21:37:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.528 21:37:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.528 21:37:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.528 21:37:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.528 21:37:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.528 21:37:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.528 21:37:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.528 21:37:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:09.528 21:37:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:09.528 21:37:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.528 21:37:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.528 21:37:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:09.528 21:37:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.528 21:37:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.528 21:37:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.528 21:37:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.528 21:37:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.528 21:37:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.528 21:37:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.528 21:37:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.528 21:37:32 -- paths/export.sh@5 -- # export PATH 00:22:09.529 21:37:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.529 21:37:32 -- nvmf/common.sh@47 -- # : 0 00:22:09.529 21:37:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:09.529 21:37:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:09.529 21:37:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.529 21:37:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.529 21:37:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.529 21:37:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:09.529 21:37:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:09.529 21:37:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:09.529 21:37:32 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:09.529 21:37:32 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:09.529 21:37:32 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:09.529 21:37:32 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:09.529 21:37:32 -- host/failover.sh@18 -- # nvmftestinit 00:22:09.529 21:37:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:09.529 21:37:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.529 21:37:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:09.529 21:37:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:09.529 21:37:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:09.529 21:37:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.529 21:37:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.529 21:37:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.529 21:37:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:09.529 21:37:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:09.529 21:37:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:09.529 21:37:32 -- common/autotest_common.sh@10 -- # set +x 00:22:16.098 21:37:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:16.098 21:37:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:16.098 21:37:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:16.098 21:37:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:16.098 21:37:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:16.098 21:37:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:16.098 21:37:38 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:16.098 21:37:38 -- nvmf/common.sh@295 -- # net_devs=() 00:22:16.098 21:37:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:16.098 21:37:38 -- nvmf/common.sh@296 -- # e810=() 00:22:16.098 21:37:38 -- nvmf/common.sh@296 -- # local -ga e810 00:22:16.098 21:37:38 -- nvmf/common.sh@297 -- # x722=() 00:22:16.098 21:37:38 -- nvmf/common.sh@297 -- # local -ga x722 00:22:16.098 21:37:38 -- nvmf/common.sh@298 -- # mlx=() 00:22:16.098 21:37:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:16.098 21:37:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.098 21:37:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.098 21:37:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.098 21:37:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.098 21:37:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.098 21:37:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.098 21:37:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.098 21:37:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.098 21:37:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.098 21:37:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.098 21:37:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.098 21:37:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:16.098 21:37:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:16.098 21:37:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:16.098 21:37:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.098 21:37:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:16.098 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:16.098 21:37:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.098 21:37:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:16.098 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:16.098 21:37:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:16.098 21:37:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.098 21:37:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.098 21:37:38 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:22:16.098 21:37:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.098 21:37:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:16.098 Found net devices under 0000:af:00.0: cvl_0_0 00:22:16.098 21:37:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.098 21:37:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.098 21:37:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.098 21:37:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:16.098 21:37:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.098 21:37:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:16.098 Found net devices under 0000:af:00.1: cvl_0_1 00:22:16.098 21:37:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.098 21:37:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:16.098 21:37:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:16.098 21:37:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:16.098 21:37:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:16.098 21:37:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.098 21:37:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.098 21:37:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.098 21:37:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:16.098 21:37:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.098 21:37:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.098 21:37:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:16.098 21:37:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.098 21:37:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.098 21:37:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:16.098 21:37:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:16.098 21:37:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.098 21:37:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.098 21:37:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.098 21:37:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.098 21:37:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:16.098 21:37:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.098 21:37:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.098 21:37:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.098 21:37:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:16.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:22:16.358 00:22:16.358 --- 10.0.0.2 ping statistics --- 00:22:16.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.358 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:22:16.358 21:37:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:16.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:22:16.358 00:22:16.358 --- 10.0.0.1 ping statistics --- 00:22:16.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.358 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:22:16.358 21:37:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.358 21:37:38 -- nvmf/common.sh@411 -- # return 0 00:22:16.358 21:37:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:16.358 21:37:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.358 21:37:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:16.358 21:37:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:16.358 21:37:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.358 21:37:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:16.358 21:37:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:16.358 21:37:39 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:16.358 21:37:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:16.358 21:37:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:16.358 21:37:39 -- common/autotest_common.sh@10 -- # set +x 00:22:16.358 21:37:39 -- nvmf/common.sh@470 -- # nvmfpid=2944883 00:22:16.358 21:37:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:16.358 21:37:39 -- nvmf/common.sh@471 -- # waitforlisten 2944883 00:22:16.358 21:37:39 -- common/autotest_common.sh@817 -- # '[' -z 2944883 ']' 00:22:16.358 21:37:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.358 21:37:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:16.358 21:37:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.358 21:37:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:16.358 21:37:39 -- common/autotest_common.sh@10 -- # set +x 00:22:16.358 [2024-04-24 21:37:39.083680] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:22:16.358 [2024-04-24 21:37:39.083726] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.358 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.358 [2024-04-24 21:37:39.157504] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:16.358 [2024-04-24 21:37:39.228987] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.358 [2024-04-24 21:37:39.229024] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.358 [2024-04-24 21:37:39.229033] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.358 [2024-04-24 21:37:39.229042] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.358 [2024-04-24 21:37:39.229049] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
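[Editor's note] With the target application up, the failover.sh steps that follow give one subsystem three TCP listeners and attach a bdevperf initiator through the first two; later in the run the active listener is removed to force a path switch. Condensed, the driving RPCs are (rpc.py path shortened):

    for port in 4420 4421 4422; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # issued later by the test to trigger failover away from the first portal:
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420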
00:22:16.358 [2024-04-24 21:37:39.229147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.358 [2024-04-24 21:37:39.229233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.358 [2024-04-24 21:37:39.229235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.293 21:37:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:17.293 21:37:39 -- common/autotest_common.sh@850 -- # return 0 00:22:17.293 21:37:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:17.293 21:37:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:17.293 21:37:39 -- common/autotest_common.sh@10 -- # set +x 00:22:17.293 21:37:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.293 21:37:39 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:17.293 [2024-04-24 21:37:40.085811] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.293 21:37:40 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:17.551 Malloc0 00:22:17.551 21:37:40 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.809 21:37:40 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:17.809 21:37:40 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.068 [2024-04-24 21:37:40.822769] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.068 21:37:40 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:18.326 [2024-04-24 21:37:41.003264] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:18.326 21:37:41 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:18.326 [2024-04-24 21:37:41.191871] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:18.585 21:37:41 -- host/failover.sh@31 -- # bdevperf_pid=2945341 00:22:18.585 21:37:41 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:18.585 21:37:41 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:18.585 21:37:41 -- host/failover.sh@34 -- # waitforlisten 2945341 /var/tmp/bdevperf.sock 00:22:18.585 21:37:41 -- common/autotest_common.sh@817 -- # '[' -z 2945341 ']' 00:22:18.585 21:37:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.585 21:37:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:18.585 21:37:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:18.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.585 21:37:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:18.585 21:37:41 -- common/autotest_common.sh@10 -- # set +x 00:22:19.519 21:37:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:19.519 21:37:42 -- common/autotest_common.sh@850 -- # return 0 00:22:19.519 21:37:42 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.519 NVMe0n1 00:22:19.519 21:37:42 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:20.086 00:22:20.086 21:37:42 -- host/failover.sh@39 -- # run_test_pid=2945587 00:22:20.086 21:37:42 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:20.086 21:37:42 -- host/failover.sh@41 -- # sleep 1 00:22:21.039 21:37:43 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:21.039 [2024-04-24 21:37:43.907724] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278690 is same with the state(5) to be set 00:22:21.039 [2024-04-24 21:37:43.907777] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278690 is same with the state(5) to be set 00:22:21.039 [2024-04-24 21:37:43.907787] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278690 is same with the state(5) to be set 00:22:21.039 [2024-04-24 21:37:43.907796] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278690 is same with the state(5) to be set 00:22:21.039 [2024-04-24 21:37:43.907809] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278690 is same with the state(5) to be set 00:22:21.039 [2024-04-24 21:37:43.907818] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278690 is same with the state(5) to be set 00:22:21.039 [2024-04-24 21:37:43.907826] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278690 is same with the state(5) to be set 00:22:21.040 [2024-04-24 21:37:43.907835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278690 is same with the state(5) to be set 00:22:21.040 [2024-04-24 21:37:43.907844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278690 is same with the state(5) to be set 00:22:21.040 [2024-04-24 21:37:43.907853] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278690 is same with the state(5) to be set 00:22:21.040 [2024-04-24 21:37:43.907861] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278690 is same with the state(5) to be set 00:22:21.040 [2024-04-24 21:37:43.907869] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278690 is same with the state(5) to be set 00:22:21.040 [2024-04-24 21:37:43.907878] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278690 is same with the 
state(5) to be set 00:22:21.040 [2024-04-24 21:37:43.907886] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2278690 is same with the state(5) to be set [... the same nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x2278690 repeats several dozen more times, only the microsecond timestamp advancing, while the port 4420 listener is torn down ...] 00:22:21.340 21:37:43 -- host/failover.sh@45 -- # sleep 3 00:22:24.645 21:37:46 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:24.645 00 00:22:24.645 21:37:47 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:24.645 [2024-04-24 21:37:47.508277] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2279bf0 is same with the state(5) to be set [... the same *ERROR* line for tqpair=0x2279bf0 repeats, timestamp advancing, while the port 4421 listener is torn down ...] 00:22:24.903 21:37:47 -- host/failover.sh@50 -- # sleep 3 00:22:28.186 21:37:50 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.186 [2024-04-24 21:37:50.705681] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.186 21:37:50 -- host/failover.sh@55 -- # sleep 1 00:22:29.121 21:37:51 -- host/failover.sh@57 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:29.121 [2024-04-24 21:37:51.899519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227a2b0 is same with the state(5) to be set [... the same nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x227a2b0 repeats several dozen more times, only the microsecond timestamp advancing, while the port 4422 listener is torn down ...] 00:22:29.123 [2024-04-24 21:37:51.900315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x227a2b0 is same with the state(5) to be set 00:22:29.123 [2024-04-24 21:37:51.900323] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227a2b0 is same with the state(5) to be set 00:22:29.123 [2024-04-24 21:37:51.900331] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227a2b0 is same with the state(5) to be set 00:22:29.123 [2024-04-24 21:37:51.900340] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227a2b0 is same with the state(5) to be set 00:22:29.123 [2024-04-24 21:37:51.900348] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227a2b0 is same with the state(5) to be set 00:22:29.123 [2024-04-24 21:37:51.900357] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227a2b0 is same with the state(5) to be set 00:22:29.123 21:37:51 -- host/failover.sh@59 -- # wait 2945587 00:22:35.689 0 00:22:35.689 21:37:57 -- host/failover.sh@61 -- # killprocess 2945341 00:22:35.689 21:37:57 -- common/autotest_common.sh@936 -- # '[' -z 2945341 ']' 00:22:35.689 21:37:57 -- common/autotest_common.sh@940 -- # kill -0 2945341 00:22:35.689 21:37:57 -- common/autotest_common.sh@941 -- # uname 00:22:35.689 21:37:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:35.689 21:37:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2945341 00:22:35.689 21:37:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:35.689 21:37:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:35.689 21:37:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2945341' 00:22:35.689 killing process with pid 2945341 00:22:35.689 21:37:57 -- common/autotest_common.sh@955 -- # kill 2945341 00:22:35.689 21:37:57 -- common/autotest_common.sh@960 -- # wait 2945341 00:22:35.689 21:37:58 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:35.689 [2024-04-24 21:37:41.265562] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:22:35.689 [2024-04-24 21:37:41.265618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2945341 ] 00:22:35.689 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.689 [2024-04-24 21:37:41.336453] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.689 [2024-04-24 21:37:41.406015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.689 Running I/O for 15 seconds... 
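Strung together without the timestamps, the failover exercise that host/failover.sh drives above is easier to follow as a shell skeleton. This is a paraphrase of the RPC calls recorded in this log, not the verbatim test script: $SPDK_DIR is a stand-in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk (the script itself does not define it), and the waitforlisten/trap bookkeeping is omitted.

    # $SPDK_DIR = /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk (abbreviation only).
    # nvmf_tgt is already running inside the cvl_0_0_ns_spdk namespace (see above).
    # Provision the target: TCP transport, one malloc namespace, three listeners.
    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s $port
    done

    # bdevperf in RPC-server mode (-z): paths are attached while it idles, then a
    # 15-second verify workload is started over them via bdevperf.py.
    $SPDK_DIR/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 15 -f &
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

    # Force failovers while I/O is in flight: drop the active listener, attach a
    # third path, drop the second listener, restore the first, drop the third.
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

Because all three paths are attached under the same controller name NVMe0, the bdev_nvme layer fails over between them as listeners disappear; the ABORTED - SQ DELETION completions dumped below are the in-flight commands of a deleted queue pair, which the failover logic is expected to resubmit on a surviving path.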
00:22:35.689 [2024-04-24 21:37:43.908816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.689 [2024-04-24 21:37:43.908854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... this print_command/print_completion pair repeats for the rest of the in-flight queue: READ commands covering lba 93328 through 93880 and WRITE commands covering lba 94208 through 94280, each len:8 with varying cid, every one completed as ABORTED - SQ DELETION (00/08) as qpair 1 is deleted during failover ...] 00:22:35.691 [2024-04-24 21:37:43.910635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-04-24 21:37:43.910645] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.691 [2024-04-24 21:37:43.910666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.691 [2024-04-24 21:37:43.910687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.691 [2024-04-24 21:37:43.910708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.691 [2024-04-24 21:37:43.910729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.691 [2024-04-24 21:37:43.910750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.691 [2024-04-24 21:37:43.910776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.691 [2024-04-24 21:37:43.910796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.910817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.910838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.910858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.910879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.910900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.910921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.910941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.910963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.910985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.910996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:35.691 [2024-04-24 21:37:43.911080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911342] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.691 [2024-04-24 21:37:43.911466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.691 [2024-04-24 21:37:43.911477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.692 [2024-04-24 21:37:43.911490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.692 [2024-04-24 21:37:43.911508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.692 [2024-04-24 21:37:43.911523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.692 [2024-04-24 21:37:43.911534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.692 [2024-04-24 21:37:43.911545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.692 [2024-04-24 21:37:43.911557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.692 [2024-04-24 21:37:43.911567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.692 [2024-04-24 21:37:43.911578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.692 [2024-04-24 21:37:43.911588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.692 [2024-04-24 21:37:43.911599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.692 [2024-04-24 21:37:43.911608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.692 [2024-04-24 21:37:43.911620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.692 [2024-04-24 21:37:43.911629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.692 [2024-04-24 21:37:43.911640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.692 [2024-04-24 21:37:43.911650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.692 [2024-04-24 21:37:43.911661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.692 [2024-04-24 21:37:43.911672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.692 [2024-04-24 21:37:43.911684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.692 [2024-04-24 21:37:43.911693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.692 [2024-04-24 21:37:43.911704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.692 [2024-04-24 21:37:43.911714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.692 [2024-04-24 21:37:43.911725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239a550 is same with the state(5) to be set 00:22:35.692 [2024-04-24 21:37:43.911739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:35.692 [2024-04-24 21:37:43.911747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:35.692 [2024-04-24 21:37:43.911757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94200 len:8 PRP1 0x0 PRP2 0x0 00:22:35.692 [2024-04-24 21:37:43.911766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.692 [2024-04-24 21:37:43.911816] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x239a550 was disconnected and freed. reset controller. 
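The "(00/08)" in these completions is status code type 0x00 (generic) / status code 0x08, ABORTED - SQ DELETION: when the TCP qpair drops, every command still queued on it is manually completed with that status. A minimal sketch of how a caller of the SPDK NVMe driver could recognize it in an I/O completion callback follows; io_complete and the retry policy are hypothetical, while the spdk_nvme_* names are public API from spdk/nvme.h and spdk/nvme_spec.h.

    #include <stdio.h>

    #include "spdk/nvme.h"
    #include "spdk/nvme_spec.h"

    /* Hypothetical completion callback: treat ABORTED - SQ DELETION (00/08),
     * the status printed throughout the log above, as a transient error that
     * can be requeued once the controller has been reset / failed over. */
    static void
    io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)cb_arg;

            if (!spdk_nvme_cpl_is_error(cpl)) {
                    return; /* success */
            }
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    fprintf(stderr, "I/O aborted by SQ deletion; requeue after reset\n");
                    return;
            }
            fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
                    cpl->status.sct, cpl->status.sc);
    }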
00:22:35.692 [2024-04-24 21:37:43.911832] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:22:35.692 [2024-04-24 21:37:43.911867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:35.692 [2024-04-24 21:37:43.911882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:35.692 [2024-04-24 21:37:43.911896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:35.692 [2024-04-24 21:37:43.911911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:35.692 [2024-04-24 21:37:43.911925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:35.692 [2024-04-24 21:37:43.911939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:35.692 [2024-04-24 21:37:43.911953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:35.692 [2024-04-24 21:37:43.911968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:35.692 [2024-04-24 21:37:43.911984] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:35.692 [2024-04-24 21:37:43.914870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:35.692 [2024-04-24 21:37:43.914904] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237b690 (9): Bad file descriptor
00:22:35.692 [2024-04-24 21:37:44.072173] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
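bdev_nvme_failover_trid here swaps the controller over to the secondary path (10.0.0.2:4421) that was registered ahead of time, then resets the controller so the reconnect targets the new address. Against the public driver API, the same step looks roughly like the sketch below; switch_path and the hard-coded addresses (taken from the log) are illustrative, while spdk_nvme_ctrlr_set_trid() and spdk_nvme_ctrlr_reset() are the real entry points and require the controller to already be in the failed state, as it is above.

    #include <stdio.h>

    #include "spdk/nvme.h"

    /* Hypothetical helper mirroring the log's "Start failover from
     * 10.0.0.2:4420 to 10.0.0.2:4421" + "resetting controller" sequence. */
    static int
    switch_path(struct spdk_nvme_ctrlr *ctrlr)
    {
            struct spdk_nvme_transport_id trid = {};

            spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
            trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
            snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
            snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4421"); /* alternate port */
            snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

            /* Point the failed controller at the new path, then reset:
             * the reset reconnects to 4421 instead of 4420. */
            if (spdk_nvme_ctrlr_set_trid(ctrlr, &trid) != 0) {
                    return -1;
            }
            return spdk_nvme_ctrlr_reset(ctrlr);
    }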
00:22:35.692 [2024-04-24 21:37:47.508663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:35.692 [2024-04-24 21:37:47.508701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[00:22:35.692-00:22:35.694: identical nvme_io_qpair_print_command / ABORTED - SQ DELETION (00/08) notice pairs for the remaining queued qid:1 READ (lba 87992-88480) and WRITE (lba 88632-89008) commands omitted]
00:22:35.694 [2024-04-24 21:37:47.510955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:73 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.510964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.510975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.510985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.510995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.511004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.511023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.511045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.511065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.511084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.511104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.511123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.511142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88568 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.511162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.511181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.511200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.511220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.511240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.694 [2024-04-24 21:37:47.511259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239c570 is same with the state(5) to be set 00:22:35.694 [2024-04-24 21:37:47.511282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:35.694 [2024-04-24 21:37:47.511291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:35.694 [2024-04-24 21:37:47.511301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88616 len:8 PRP1 0x0 PRP2 0x0 00:22:35.694 [2024-04-24 21:37:47.511310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.694 [2024-04-24 21:37:47.511354] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x239c570 was disconnected and freed. reset controller. 
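The "(00/08)" pair printed on every aborted completion above is the NVMe status code type and status code: SCT 0x0 (generic command status) with SC 0x08, ABORTED - SQ DELETION, the status a queued command receives when its submission queue is torn down during the TCP reconnect. A minimal, self-contained C sketch of that decoding follows; the field layout matches the NVMe completion status definition, but the struct and names here are illustrative, not taken from this log's source:

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe completion status word as printed above: "(SCT/SC) ... p m dnr".
     * Field layout follows the NVMe base spec; names are illustrative. */
    struct nvme_status {
        uint16_t p    : 1; /* phase tag */
        uint16_t sc   : 8; /* status code      -> the "08" */
        uint16_t sct  : 3; /* status code type -> the "00" */
        uint16_t rsvd : 2;
        uint16_t m    : 1; /* more status available */
        uint16_t dnr  : 1; /* do not retry */
    };

    static const char *status_str(unsigned sct, unsigned sc)
    {
        /* SCT 0x0 (generic) + SC 0x08 is what the reconnect produces here. */
        return (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "OTHER";
    }

    int main(void)
    {
        struct nvme_status st = { .p = 0, .sc = 0x08, .sct = 0x0, .m = 0, .dnr = 0 };
        printf("%s (%02x/%02x) p:%u m:%u dnr:%u\n",
               status_str(st.sct, st.sc),
               (unsigned)st.sct, (unsigned)st.sc,
               (unsigned)st.p, (unsigned)st.m, (unsigned)st.dnr);
        return 0;
    }

Run as written this prints "ABORTED - SQ DELETION (00/08) p:0 m:0 dnr:0", the same rendering spdk_nvme_print_completion uses in the records above.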
00:22:35.694 [2024-04-24 21:37:47.511365] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:35.694 [2024-04-24 21:37:47.511388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:35.694 [2024-04-24 21:37:47.511398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:35.694 [2024-04-24 21:37:47.511408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:35.694 [2024-04-24 21:37:47.511417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:35.694 [2024-04-24 21:37:47.511426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:35.694 [2024-04-24 21:37:47.511435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:35.694 [2024-04-24 21:37:47.511444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:35.694 [2024-04-24 21:37:47.511461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:35.694 [2024-04-24 21:37:47.511475] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:35.694 [2024-04-24 21:37:47.514144] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:35.694 [2024-04-24 21:37:47.514175] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237b690 (9): Bad file descriptor
00:22:35.694 [2024-04-24 21:37:47.675825] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
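Every abort above prints dnr:0, so the "do not retry" bit is clear: these commands may legally be resubmitted once the controller reset and failover complete, which is why the bdev layer can report "Resetting controller successful" without data loss. A hedged sketch of how an SPDK application callback might classify such completions, using public spdk/nvme.h identifiers; this is an illustration, not the bdev_nvme module's actual retry logic:

    #include <spdk/nvme.h>

    /* Sketch only: classify a completion the way the log above prints them.
     * SCT 0x0 / SC 0x08 with dnr:0 marks a command aborted by submission
     * queue deletion that may be resubmitted after the reset finishes. */
    static bool
    io_should_retry(const struct spdk_nvme_cpl *cpl)
    {
        return spdk_nvme_cpl_is_error(cpl) &&
               cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
               cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION &&
               !cpl->status.dnr;
    }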
00:22:35.694 [2024-04-24 21:37:51.900550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900791] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.900981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.900992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:48 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23056 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:35.695 [2024-04-24 21:37:51.901606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.695 [2024-04-24 21:37:51.901707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.695 [2024-04-24 21:37:51.901717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.901726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.901737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.901747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.901757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.901766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.901777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.901786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.901797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.901806] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.901817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.901826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.901836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.901845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.901855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.901864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.901875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.901884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.901895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.901904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.901915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.901924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.901934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.901945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.901955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.901964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.901974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.901984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.901994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902004] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.902282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.902301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.902321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.902341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.902360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.902379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.902399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:35.696 [2024-04-24 21:37:51.902409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.902418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.902437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.902462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.902482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:35.696 [2024-04-24 21:37:51.902502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902612] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.902987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.902996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.903007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:93 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.903016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.903026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.903035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.696 [2024-04-24 21:37:51.903051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.696 [2024-04-24 21:37:51.903060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.697 [2024-04-24 21:37:51.903071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.697 [2024-04-24 21:37:51.903080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.697 [2024-04-24 21:37:51.903090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.697 [2024-04-24 21:37:51.903099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.697 [2024-04-24 21:37:51.903110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.697 [2024-04-24 21:37:51.903119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.697 [2024-04-24 21:37:51.903129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239e4e0 is same with the state(5) to be set 00:22:35.697 [2024-04-24 21:37:51.903141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:35.697 [2024-04-24 21:37:51.903148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:35.697 [2024-04-24 21:37:51.903158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23592 len:8 PRP1 0x0 PRP2 0x0 00:22:35.697 [2024-04-24 21:37:51.903167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.697 [2024-04-24 21:37:51.903212] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x239e4e0 was disconnected and freed. reset controller. 
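
The long run of paired READ / "ABORTED - SQ DELETION (00/08)" notices above is bdev_nvme draining I/O qpair 1 during failover: once the submission queue is deleted, every read still queued on it is completed manually with that status, and the qpair is then disconnected and freed before the controller reset begins. A minimal sketch (not part of the test itself) for sanity-checking such a transcript, assuming the try.txt path this test cats further down:

    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    # every manually completed command in the drain should carry the
    # SQ-deletion status code (00/08)
    aborted=$(grep -c 'ABORTED - SQ DELETION' "$log")
    sq_del=$(grep 'ABORTED - SQ DELETION' "$log" | grep -c '(00/08)')
    [ "$aborted" -eq "$sq_del" ] && echo "all $aborted aborts carry (00/08)"
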
00:22:35.697 [2024-04-24 21:37:51.903223] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:35.697 [2024-04-24 21:37:51.903247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.697 [2024-04-24 21:37:51.903260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.697 [2024-04-24 21:37:51.903270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.697 [2024-04-24 21:37:51.903280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.697 [2024-04-24 21:37:51.903289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.697 [2024-04-24 21:37:51.903299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.697 [2024-04-24 21:37:51.903308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.697 [2024-04-24 21:37:51.903317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.697 [2024-04-24 21:37:51.903326] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:35.697 [2024-04-24 21:37:51.906011] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:35.697 [2024-04-24 21:37:51.906044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237b690 (9): Bad file descriptor 00:22:35.697 [2024-04-24 21:37:52.020372] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
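
The "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420" notice is bdev_nvme rotating to the next transport ID registered for this controller. The three paths exist because the test attaches the same controller name to each port; condensed from the rpc.py calls that appear later in this transcript (the first call creates bdev NVMe0n1, the later ones only record alternate trids for failover):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
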
00:22:35.697
00:22:35.697 Latency(us)
00:22:35.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:35.697 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:35.697 Verification LBA range: start 0x0 length 0x4000
00:22:35.697 NVMe0n1 : 15.00 11285.16 44.08 1425.47 0.00 10049.83 1389.36 30408.70
00:22:35.697 ===================================================================================================================
00:22:35.697 Total : 11285.16 44.08 1425.47 0.00 10049.83 1389.36 30408.70
00:22:35.697 Received shutdown signal, test time was about 15.000000 seconds
00:22:35.697
00:22:35.697 Latency(us)
00:22:35.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:35.697 ===================================================================================================================
00:22:35.697 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:35.697 21:37:58 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:35.697 21:37:58 -- host/failover.sh@65 -- # count=3
00:22:35.697 21:37:58 -- host/failover.sh@67 -- # (( count != 3 ))
00:22:35.697 21:37:58 -- host/failover.sh@73 -- # bdevperf_pid=2948115
00:22:35.697 21:37:58 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:35.697 21:37:58 -- host/failover.sh@75 -- # waitforlisten 2948115 /var/tmp/bdevperf.sock
00:22:35.697 21:37:58 -- common/autotest_common.sh@817 -- # '[' -z 2948115 ']'
00:22:35.697 21:37:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:35.697 21:37:58 -- common/autotest_common.sh@822 -- # local max_retries=100
00:22:35.697 21:37:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
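
The 15-second run above passes because the transcript contains exactly three "Resetting controller successful" notices, one per exercised path, which is what the (( count != 3 )) guard checks. The relaunch here starts bdevperf with -z, so the process comes up idle on its RPC socket and waits to be configured; waitforlisten then polls that socket before the script proceeds. A sketch of the same launch-and-poll pattern, assuming the standard rpc_get_methods RPC as the liveness probe (the helper here allows up to max_retries=100 attempts):

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    for (( i = 0; i < 100; i++ )); do
        # succeeds once bdevperf is up and serving RPCs on the UNIX socket
        $rpc -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

Once the socket answers, the commands that follow can add listeners and attach the controller paths before any I/O is started.
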
00:22:35.697 21:37:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:35.697 21:37:58 -- common/autotest_common.sh@10 -- # set +x 00:22:36.263 21:37:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:36.263 21:37:59 -- common/autotest_common.sh@850 -- # return 0 00:22:36.263 21:37:59 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:36.521 [2024-04-24 21:37:59.164234] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:36.521 21:37:59 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:36.521 [2024-04-24 21:37:59.332694] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:36.521 21:37:59 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:36.819 NVMe0n1 00:22:37.077 21:37:59 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:37.335 00:22:37.335 21:38:00 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:37.593 00:22:37.593 21:38:00 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:37.593 21:38:00 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:37.851 21:38:00 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.109 21:38:00 -- host/failover.sh@87 -- # sleep 3 00:22:41.392 21:38:03 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:41.392 21:38:03 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:41.392 21:38:03 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:41.392 21:38:03 -- host/failover.sh@90 -- # run_test_pid=2949183 00:22:41.392 21:38:03 -- host/failover.sh@92 -- # wait 2949183 00:22:42.329 0 00:22:42.329 21:38:05 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:42.329 [2024-04-24 21:37:58.202503] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:22:42.329 [2024-04-24 21:37:58.202556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2948115 ] 00:22:42.329 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.329 [2024-04-24 21:37:58.272361] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.329 [2024-04-24 21:37:58.336471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.329 [2024-04-24 21:38:00.779520] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:42.329 [2024-04-24 21:38:00.779574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.329 [2024-04-24 21:38:00.779588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.329 [2024-04-24 21:38:00.779599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.329 [2024-04-24 21:38:00.779609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.329 [2024-04-24 21:38:00.779619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.329 [2024-04-24 21:38:00.779629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.329 [2024-04-24 21:38:00.779639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.329 [2024-04-24 21:38:00.779648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.329 [2024-04-24 21:38:00.779658] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:42.329 [2024-04-24 21:38:00.779687] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:42.329 [2024-04-24 21:38:00.779703] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x67f690 (9): Bad file descriptor 00:22:42.329 [2024-04-24 21:38:00.793812] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:42.329 Running I/O for 1 seconds... 
00:22:42.329 00:22:42.329 Latency(us) 00:22:42.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.329 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:42.329 Verification LBA range: start 0x0 length 0x4000 00:22:42.329 NVMe0n1 : 1.01 10929.90 42.69 0.00 0.00 11656.97 2385.51 27682.41 00:22:42.329 =================================================================================================================== 00:22:42.329 Total : 10929.90 42.69 0.00 0.00 11656.97 2385.51 27682.41 00:22:42.329 21:38:05 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:42.329 21:38:05 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:42.587 21:38:05 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:42.587 21:38:05 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:42.587 21:38:05 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:42.846 21:38:05 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:43.104 21:38:05 -- host/failover.sh@101 -- # sleep 3 00:22:46.388 21:38:08 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:46.388 21:38:08 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:46.388 21:38:08 -- host/failover.sh@108 -- # killprocess 2948115 00:22:46.388 21:38:09 -- common/autotest_common.sh@936 -- # '[' -z 2948115 ']' 00:22:46.388 21:38:09 -- common/autotest_common.sh@940 -- # kill -0 2948115 00:22:46.388 21:38:09 -- common/autotest_common.sh@941 -- # uname 00:22:46.388 21:38:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:46.388 21:38:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2948115 00:22:46.388 21:38:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:46.388 21:38:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:46.388 21:38:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2948115' 00:22:46.388 killing process with pid 2948115 00:22:46.388 21:38:09 -- common/autotest_common.sh@955 -- # kill 2948115 00:22:46.388 21:38:09 -- common/autotest_common.sh@960 -- # wait 2948115 00:22:46.388 21:38:09 -- host/failover.sh@110 -- # sync 00:22:46.388 21:38:09 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:46.645 21:38:09 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:46.645 21:38:09 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:46.645 21:38:09 -- host/failover.sh@116 -- # nvmftestfini 00:22:46.645 21:38:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:46.645 21:38:09 -- nvmf/common.sh@117 -- # sync 00:22:46.645 21:38:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.645 21:38:09 -- nvmf/common.sh@120 -- # set +e 00:22:46.645 21:38:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.645 21:38:09 -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-tcp 00:22:46.645 rmmod nvme_tcp 00:22:46.645 rmmod nvme_fabrics 00:22:46.645 rmmod nvme_keyring 00:22:46.645 21:38:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.645 21:38:09 -- nvmf/common.sh@124 -- # set -e 00:22:46.645 21:38:09 -- nvmf/common.sh@125 -- # return 0 00:22:46.645 21:38:09 -- nvmf/common.sh@478 -- # '[' -n 2944883 ']' 00:22:46.645 21:38:09 -- nvmf/common.sh@479 -- # killprocess 2944883 00:22:46.645 21:38:09 -- common/autotest_common.sh@936 -- # '[' -z 2944883 ']' 00:22:46.645 21:38:09 -- common/autotest_common.sh@940 -- # kill -0 2944883 00:22:46.645 21:38:09 -- common/autotest_common.sh@941 -- # uname 00:22:46.645 21:38:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:46.645 21:38:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2944883 00:22:46.904 21:38:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:46.904 21:38:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:46.904 21:38:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2944883' 00:22:46.904 killing process with pid 2944883 00:22:46.904 21:38:09 -- common/autotest_common.sh@955 -- # kill 2944883 00:22:46.904 21:38:09 -- common/autotest_common.sh@960 -- # wait 2944883 00:22:46.904 21:38:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:46.904 21:38:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:46.904 21:38:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:46.904 21:38:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.904 21:38:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:47.162 21:38:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.162 21:38:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.162 21:38:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.068 21:38:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:49.068 00:22:49.068 real 0m39.827s 00:22:49.068 user 2m2.766s 00:22:49.068 sys 0m9.863s 00:22:49.068 21:38:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:49.068 21:38:11 -- common/autotest_common.sh@10 -- # set +x 00:22:49.068 ************************************ 00:22:49.068 END TEST nvmf_failover 00:22:49.068 ************************************ 00:22:49.068 21:38:11 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:49.068 21:38:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:49.068 21:38:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:49.068 21:38:11 -- common/autotest_common.sh@10 -- # set +x 00:22:49.327 ************************************ 00:22:49.327 START TEST nvmf_discovery 00:22:49.327 ************************************ 00:22:49.327 21:38:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:49.327 * Looking for test storage... 
00:22:49.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:49.328 21:38:12 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.328 21:38:12 -- nvmf/common.sh@7 -- # uname -s 00:22:49.328 21:38:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.328 21:38:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.328 21:38:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.328 21:38:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.328 21:38:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.328 21:38:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.328 21:38:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.328 21:38:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.328 21:38:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.328 21:38:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.328 21:38:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:49.328 21:38:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:49.328 21:38:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.328 21:38:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.328 21:38:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.328 21:38:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.328 21:38:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.328 21:38:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.328 21:38:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.328 21:38:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.328 21:38:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.328 21:38:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.328 21:38:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.328 21:38:12 -- paths/export.sh@5 -- # export PATH 00:22:49.328 21:38:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.328 21:38:12 -- nvmf/common.sh@47 -- # : 0 00:22:49.328 21:38:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:49.328 21:38:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:49.328 21:38:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.328 21:38:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.328 21:38:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.328 21:38:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:49.328 21:38:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:49.328 21:38:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:49.328 21:38:12 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:49.328 21:38:12 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:49.328 21:38:12 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:49.328 21:38:12 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:49.328 21:38:12 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:49.328 21:38:12 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:49.328 21:38:12 -- host/discovery.sh@25 -- # nvmftestinit 00:22:49.328 21:38:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:49.328 21:38:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.328 21:38:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:49.328 21:38:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:49.328 21:38:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:49.328 21:38:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.328 21:38:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.328 21:38:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.328 21:38:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:49.328 21:38:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:49.328 21:38:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:49.328 21:38:12 -- common/autotest_common.sh@10 -- # set +x 00:22:55.932 21:38:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:55.932 21:38:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:55.932 21:38:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:55.932 21:38:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:55.932 21:38:18 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:55.932 21:38:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:55.932 21:38:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:55.932 21:38:18 -- nvmf/common.sh@295 -- # net_devs=() 00:22:55.932 21:38:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:55.932 21:38:18 -- nvmf/common.sh@296 -- # e810=() 00:22:55.932 21:38:18 -- nvmf/common.sh@296 -- # local -ga e810 00:22:55.932 21:38:18 -- nvmf/common.sh@297 -- # x722=() 00:22:55.932 21:38:18 -- nvmf/common.sh@297 -- # local -ga x722 00:22:55.932 21:38:18 -- nvmf/common.sh@298 -- # mlx=() 00:22:55.932 21:38:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:55.932 21:38:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.932 21:38:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.932 21:38:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.932 21:38:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.932 21:38:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.932 21:38:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.932 21:38:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.932 21:38:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.932 21:38:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.932 21:38:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.932 21:38:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.932 21:38:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:55.932 21:38:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:55.932 21:38:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:55.932 21:38:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.932 21:38:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:55.932 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:55.932 21:38:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.932 21:38:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:55.932 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:55.932 21:38:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:55.932 21:38:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.932 
21:38:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.932 21:38:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:55.932 21:38:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.932 21:38:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:55.932 Found net devices under 0000:af:00.0: cvl_0_0 00:22:55.932 21:38:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.932 21:38:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.932 21:38:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.932 21:38:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:55.932 21:38:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.932 21:38:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:55.932 Found net devices under 0000:af:00.1: cvl_0_1 00:22:55.932 21:38:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.932 21:38:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:55.932 21:38:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:55.932 21:38:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:55.932 21:38:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:55.932 21:38:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.932 21:38:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.932 21:38:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.932 21:38:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:55.932 21:38:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.932 21:38:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.932 21:38:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:55.932 21:38:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.932 21:38:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.932 21:38:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:55.932 21:38:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:55.932 21:38:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.932 21:38:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.932 21:38:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.932 21:38:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.932 21:38:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:55.932 21:38:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.932 21:38:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.932 21:38:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.932 21:38:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:55.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:22:55.933 00:22:55.933 --- 10.0.0.2 ping statistics --- 00:22:55.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.933 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:22:55.933 21:38:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:55.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:22:55.933 00:22:55.933 --- 10.0.0.1 ping statistics --- 00:22:55.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.933 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:22:55.933 21:38:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.933 21:38:18 -- nvmf/common.sh@411 -- # return 0 00:22:55.933 21:38:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:55.933 21:38:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.933 21:38:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:55.933 21:38:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:55.933 21:38:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.933 21:38:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:55.933 21:38:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:55.933 21:38:18 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:55.933 21:38:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:55.933 21:38:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:55.933 21:38:18 -- common/autotest_common.sh@10 -- # set +x 00:22:55.933 21:38:18 -- nvmf/common.sh@470 -- # nvmfpid=2953698 00:22:55.933 21:38:18 -- nvmf/common.sh@471 -- # waitforlisten 2953698 00:22:55.933 21:38:18 -- common/autotest_common.sh@817 -- # '[' -z 2953698 ']' 00:22:55.933 21:38:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.933 21:38:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:55.933 21:38:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.933 21:38:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:55.933 21:38:18 -- common/autotest_common.sh@10 -- # set +x 00:22:55.933 21:38:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:55.933 [2024-04-24 21:38:18.757513] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:22:55.933 [2024-04-24 21:38:18.757558] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.933 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.192 [2024-04-24 21:38:18.830514] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.192 [2024-04-24 21:38:18.901828] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.192 [2024-04-24 21:38:18.901864] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.192 [2024-04-24 21:38:18.901873] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.192 [2024-04-24 21:38:18.901881] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.192 [2024-04-24 21:38:18.901888] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
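
The 10.0.0.x addressing used throughout comes from nvmf_tcp_init, logged just above: the first e810 port (cvl_0_0) is moved into a private network namespace as the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so host and target traffic really crosses the physical link between the two ports. Condensed from the commands recorded above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # the 0.176 ms reply above

The target nvmf_tgt is then launched through "ip netns exec cvl_0_0_ns_spdk", which is why its listeners bind to 10.0.0.2 in every RPC that follows.
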
00:22:56.192 [2024-04-24 21:38:18.901913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.761 21:38:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:56.761 21:38:19 -- common/autotest_common.sh@850 -- # return 0 00:22:56.761 21:38:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:56.761 21:38:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:56.761 21:38:19 -- common/autotest_common.sh@10 -- # set +x 00:22:56.761 21:38:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.761 21:38:19 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:56.761 21:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.761 21:38:19 -- common/autotest_common.sh@10 -- # set +x 00:22:56.761 [2024-04-24 21:38:19.576445] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.761 21:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.761 21:38:19 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:56.761 21:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.761 21:38:19 -- common/autotest_common.sh@10 -- # set +x 00:22:56.761 [2024-04-24 21:38:19.588599] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:56.761 21:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.761 21:38:19 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:56.761 21:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.761 21:38:19 -- common/autotest_common.sh@10 -- # set +x 00:22:56.761 null0 00:22:56.761 21:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.761 21:38:19 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:56.761 21:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.761 21:38:19 -- common/autotest_common.sh@10 -- # set +x 00:22:56.761 null1 00:22:56.761 21:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.761 21:38:19 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:56.761 21:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.761 21:38:19 -- common/autotest_common.sh@10 -- # set +x 00:22:56.761 21:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.761 21:38:19 -- host/discovery.sh@45 -- # hostpid=2953969 00:22:56.761 21:38:19 -- host/discovery.sh@46 -- # waitforlisten 2953969 /tmp/host.sock 00:22:56.761 21:38:19 -- common/autotest_common.sh@817 -- # '[' -z 2953969 ']' 00:22:56.761 21:38:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:22:56.761 21:38:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:56.761 21:38:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:56.761 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:56.761 21:38:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:56.761 21:38:19 -- common/autotest_common.sh@10 -- # set +x 00:22:56.761 21:38:19 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:57.021 [2024-04-24 21:38:19.666076] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
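
Two SPDK processes are now up: the target (default RPC socket /var/tmp/spdk.sock, inside the namespace) and a second nvmf_tgt that plays the host role on /tmp/host.sock. The target gets a TCP transport, a discovery listener on port 8009 and two null bdevs to export; the host side then follows the discovery service and attaches whatever it reports. Condensed from the rpc_cmd calls above and just below (bdev_null_create takes a size in MB and a block size):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target side
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    $rpc bdev_null_create null0 1000 512
    $rpc bdev_null_create null1 1000 512
    # host side: auto-attach subsystems reported by the discovery service
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
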
00:22:57.021 [2024-04-24 21:38:19.666120] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2953969 ] 00:22:57.021 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.021 [2024-04-24 21:38:19.736813] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.021 [2024-04-24 21:38:19.807115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.588 21:38:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:57.588 21:38:20 -- common/autotest_common.sh@850 -- # return 0 00:22:57.588 21:38:20 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.588 21:38:20 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:57.588 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.588 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.588 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.588 21:38:20 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:57.588 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.588 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.588 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.588 21:38:20 -- host/discovery.sh@72 -- # notify_id=0 00:22:57.588 21:38:20 -- host/discovery.sh@83 -- # get_subsystem_names 00:22:57.588 21:38:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:57.588 21:38:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:57.588 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.588 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.588 21:38:20 -- host/discovery.sh@59 -- # sort 00:22:57.588 21:38:20 -- host/discovery.sh@59 -- # xargs 00:22:57.588 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.847 21:38:20 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:57.847 21:38:20 -- host/discovery.sh@84 -- # get_bdev_list 00:22:57.847 21:38:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.847 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.847 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.847 21:38:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:57.847 21:38:20 -- host/discovery.sh@55 -- # sort 00:22:57.847 21:38:20 -- host/discovery.sh@55 -- # xargs 00:22:57.847 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.847 21:38:20 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:57.847 21:38:20 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:57.847 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.847 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.847 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.847 21:38:20 -- host/discovery.sh@87 -- # get_subsystem_names 00:22:57.847 21:38:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:57.847 21:38:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:57.847 21:38:20 -- host/discovery.sh@59 -- # sort 00:22:57.847 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:22:57.847 21:38:20 -- host/discovery.sh@59 -- # xargs 00:22:57.847 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.847 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.847 21:38:20 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:57.847 21:38:20 -- host/discovery.sh@88 -- # get_bdev_list 00:22:57.847 21:38:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.847 21:38:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:57.847 21:38:20 -- host/discovery.sh@55 -- # sort 00:22:57.847 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.847 21:38:20 -- host/discovery.sh@55 -- # xargs 00:22:57.847 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.847 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.847 21:38:20 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:57.847 21:38:20 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:57.847 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.847 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.847 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.847 21:38:20 -- host/discovery.sh@91 -- # get_subsystem_names 00:22:57.847 21:38:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:57.847 21:38:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:57.847 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.847 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.847 21:38:20 -- host/discovery.sh@59 -- # sort 00:22:57.847 21:38:20 -- host/discovery.sh@59 -- # xargs 00:22:57.847 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.847 21:38:20 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:57.847 21:38:20 -- host/discovery.sh@92 -- # get_bdev_list 00:22:57.847 21:38:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.847 21:38:20 -- host/discovery.sh@55 -- # xargs 00:22:57.847 21:38:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:57.847 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.847 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:57.847 21:38:20 -- host/discovery.sh@55 -- # sort 00:22:57.847 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.106 21:38:20 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:58.106 21:38:20 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:58.106 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.106 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:58.106 [2024-04-24 21:38:20.771709] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.106 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.106 21:38:20 -- host/discovery.sh@97 -- # get_subsystem_names 00:22:58.106 21:38:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:58.106 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.106 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:58.106 21:38:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:58.106 21:38:20 -- host/discovery.sh@59 -- # sort 00:22:58.106 21:38:20 -- host/discovery.sh@59 -- # xargs 00:22:58.106 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.106 21:38:20 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:58.106 21:38:20 -- host/discovery.sh@98 -- # get_bdev_list 00:22:58.106 21:38:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.106 21:38:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:58.106 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.106 21:38:20 -- host/discovery.sh@55 -- # sort 00:22:58.106 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:58.106 21:38:20 -- host/discovery.sh@55 -- # xargs 00:22:58.106 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.106 21:38:20 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:58.106 21:38:20 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:58.106 21:38:20 -- host/discovery.sh@79 -- # expected_count=0 00:22:58.106 21:38:20 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:58.106 21:38:20 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:58.106 21:38:20 -- common/autotest_common.sh@901 -- # local max=10 00:22:58.106 21:38:20 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:58.106 21:38:20 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:58.106 21:38:20 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:58.106 21:38:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:58.106 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.106 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:58.106 21:38:20 -- host/discovery.sh@74 -- # jq '. | length' 00:22:58.106 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.106 21:38:20 -- host/discovery.sh@74 -- # notification_count=0 00:22:58.106 21:38:20 -- host/discovery.sh@75 -- # notify_id=0 00:22:58.106 21:38:20 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:58.106 21:38:20 -- common/autotest_common.sh@904 -- # return 0 00:22:58.106 21:38:20 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:58.106 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.107 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:58.107 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.107 21:38:20 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:58.107 21:38:20 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:58.107 21:38:20 -- common/autotest_common.sh@901 -- # local max=10 00:22:58.107 21:38:20 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:58.107 21:38:20 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:58.107 21:38:20 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:22:58.107 21:38:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:58.107 21:38:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:58.107 21:38:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.107 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:22:58.107 21:38:20 -- host/discovery.sh@59 -- # sort 00:22:58.107 21:38:20 -- host/discovery.sh@59 -- # xargs 00:22:58.107 21:38:20 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:22:58.107 21:38:20 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:22:58.107 21:38:20 -- common/autotest_common.sh@906 -- # sleep 1 00:22:58.674 [2024-04-24 21:38:21.508708] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:58.674 [2024-04-24 21:38:21.508729] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:58.674 [2024-04-24 21:38:21.508745] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.934 [2024-04-24 21:38:21.598025] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:58.934 [2024-04-24 21:38:21.657476] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:58.934 [2024-04-24 21:38:21.657495] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:59.193 21:38:21 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:59.193 21:38:21 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:59.193 21:38:21 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:22:59.193 21:38:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:59.193 21:38:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:59.193 21:38:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.193 21:38:21 -- common/autotest_common.sh@10 -- # set +x 00:22:59.193 21:38:21 -- host/discovery.sh@59 -- # sort 00:22:59.193 21:38:21 -- host/discovery.sh@59 -- # xargs 00:22:59.193 21:38:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.193 21:38:22 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.193 21:38:22 -- common/autotest_common.sh@904 -- # return 0 00:22:59.193 21:38:22 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:59.193 21:38:22 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:59.193 21:38:22 -- common/autotest_common.sh@901 -- # local max=10 00:22:59.193 21:38:22 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:59.193 21:38:22 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:59.193 21:38:22 -- common/autotest_common.sh@903 -- # get_bdev_list 00:22:59.193 21:38:22 -- host/discovery.sh@55 -- # sort 00:22:59.193 21:38:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.193 21:38:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:59.193 21:38:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.193 21:38:22 -- common/autotest_common.sh@10 -- # set +x 00:22:59.193 21:38:22 -- host/discovery.sh@55 -- # xargs 00:22:59.193 21:38:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.193 21:38:22 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:59.193 21:38:22 -- common/autotest_common.sh@904 -- # return 0 00:22:59.193 21:38:22 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:59.194 21:38:22 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:59.194 21:38:22 -- common/autotest_common.sh@901 -- # local max=10 00:22:59.194 21:38:22 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:22:59.194 21:38:22 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:59.194 21:38:22 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:22:59.194 21:38:22 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:59.194 21:38:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.194 21:38:22 -- common/autotest_common.sh@10 -- # set +x 00:22:59.194 21:38:22 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:59.194 21:38:22 -- host/discovery.sh@63 -- # sort -n 00:22:59.194 21:38:22 -- host/discovery.sh@63 -- # xargs 00:22:59.194 21:38:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.452 21:38:22 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:22:59.452 21:38:22 -- common/autotest_common.sh@904 -- # return 0 00:22:59.452 21:38:22 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:59.452 21:38:22 -- host/discovery.sh@79 -- # expected_count=1 00:22:59.452 21:38:22 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:59.452 21:38:22 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:59.452 21:38:22 -- common/autotest_common.sh@901 -- # local max=10 00:22:59.452 21:38:22 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:59.452 21:38:22 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:59.452 21:38:22 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:59.452 21:38:22 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:59.452 21:38:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.452 21:38:22 -- common/autotest_common.sh@10 -- # set +x 00:22:59.452 21:38:22 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:59.452 21:38:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.452 21:38:22 -- host/discovery.sh@74 -- # notification_count=1 00:22:59.452 21:38:22 -- host/discovery.sh@75 -- # notify_id=1 00:22:59.452 21:38:22 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:59.452 21:38:22 -- common/autotest_common.sh@904 -- # return 0 00:22:59.452 21:38:22 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:59.452 21:38:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.453 21:38:22 -- common/autotest_common.sh@10 -- # set +x 00:22:59.453 21:38:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.453 21:38:22 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:59.453 21:38:22 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:59.453 21:38:22 -- common/autotest_common.sh@901 -- # local max=10 00:22:59.453 21:38:22 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:59.453 21:38:22 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:59.453 21:38:22 -- common/autotest_common.sh@903 -- # get_bdev_list 00:22:59.453 21:38:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.453 21:38:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:59.453 21:38:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.453 21:38:22 -- common/autotest_common.sh@10 -- # set +x 00:22:59.453 21:38:22 -- host/discovery.sh@55 -- # sort 00:22:59.453 21:38:22 -- host/discovery.sh@55 -- # xargs 00:22:59.712 21:38:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.712 21:38:22 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:59.712 21:38:22 -- common/autotest_common.sh@904 -- # return 0 00:22:59.712 21:38:22 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:59.712 21:38:22 -- host/discovery.sh@79 -- # expected_count=1 00:22:59.712 21:38:22 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:59.712 21:38:22 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:59.712 21:38:22 -- common/autotest_common.sh@901 -- # local max=10 00:22:59.712 21:38:22 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:59.712 21:38:22 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:59.712 21:38:22 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:59.712 21:38:22 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:59.712 21:38:22 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:59.712 21:38:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.712 21:38:22 -- common/autotest_common.sh@10 -- # set +x 00:22:59.712 21:38:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.712 21:38:22 -- host/discovery.sh@74 -- # notification_count=1 00:22:59.712 21:38:22 -- host/discovery.sh@75 -- # notify_id=2 00:22:59.712 21:38:22 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:59.712 21:38:22 -- common/autotest_common.sh@904 -- # return 0 00:22:59.712 21:38:22 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:59.712 21:38:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.712 21:38:22 -- common/autotest_common.sh@10 -- # set +x 00:22:59.712 [2024-04-24 21:38:22.520473] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:59.712 [2024-04-24 21:38:22.520884] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:59.712 [2024-04-24 21:38:22.520907] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:59.712 21:38:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.713 21:38:22 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:59.713 21:38:22 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:59.713 21:38:22 -- common/autotest_common.sh@901 -- # local max=10 00:22:59.713 21:38:22 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:59.713 21:38:22 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:59.713 21:38:22 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:22:59.713 21:38:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:59.713 21:38:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.713 21:38:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:59.713 21:38:22 -- common/autotest_common.sh@10 -- # set +x 00:22:59.713 21:38:22 -- host/discovery.sh@59 -- # sort 00:22:59.713 21:38:22 -- host/discovery.sh@59 -- # xargs 00:22:59.713 21:38:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.713 21:38:22 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.713 21:38:22 -- common/autotest_common.sh@904 -- # return 0 00:22:59.713 21:38:22 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:59.713 21:38:22 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:59.713 21:38:22 -- common/autotest_common.sh@901 -- # local max=10 00:22:59.713 21:38:22 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:59.713 21:38:22 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:59.713 21:38:22 -- common/autotest_common.sh@903 -- # get_bdev_list 00:22:59.713 21:38:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:59.713 21:38:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.713 21:38:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.713 21:38:22 -- common/autotest_common.sh@10 -- # set +x 00:22:59.713 21:38:22 -- host/discovery.sh@55 -- # sort 00:22:59.713 21:38:22 -- host/discovery.sh@55 -- # xargs 00:22:59.972 [2024-04-24 21:38:22.607424] 
bdev_nvme.c:6843:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:59.972 21:38:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.972 21:38:22 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:59.972 21:38:22 -- common/autotest_common.sh@904 -- # return 0 00:22:59.972 21:38:22 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:59.972 21:38:22 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:59.972 21:38:22 -- common/autotest_common.sh@901 -- # local max=10 00:22:59.972 21:38:22 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:59.972 21:38:22 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:59.972 21:38:22 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:22:59.972 21:38:22 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:59.972 21:38:22 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:59.972 21:38:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.972 21:38:22 -- host/discovery.sh@63 -- # sort -n 00:22:59.972 21:38:22 -- common/autotest_common.sh@10 -- # set +x 00:22:59.972 21:38:22 -- host/discovery.sh@63 -- # xargs 00:22:59.972 21:38:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.972 [2024-04-24 21:38:22.671127] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:59.972 [2024-04-24 21:38:22.671144] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:59.972 [2024-04-24 21:38:22.671151] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:59.972 21:38:22 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:59.972 21:38:22 -- common/autotest_common.sh@906 -- # sleep 1 00:23:00.907 21:38:23 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:00.907 21:38:23 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:00.907 21:38:23 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:00.907 21:38:23 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:00.907 21:38:23 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:00.907 21:38:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.907 21:38:23 -- common/autotest_common.sh@10 -- # set +x 00:23:00.907 21:38:23 -- host/discovery.sh@63 -- # sort -n 00:23:00.907 21:38:23 -- host/discovery.sh@63 -- # xargs 00:23:00.907 21:38:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.907 21:38:23 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:00.907 21:38:23 -- common/autotest_common.sh@904 -- # return 0 00:23:00.907 21:38:23 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:00.907 21:38:23 -- host/discovery.sh@79 -- # expected_count=0 00:23:00.907 21:38:23 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:00.907 
21:38:23 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:00.907 21:38:23 -- common/autotest_common.sh@901 -- # local max=10 00:23:00.907 21:38:23 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:00.907 21:38:23 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:00.907 21:38:23 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:00.907 21:38:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:00.907 21:38:23 -- host/discovery.sh@74 -- # jq '. | length' 00:23:00.907 21:38:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.907 21:38:23 -- common/autotest_common.sh@10 -- # set +x 00:23:00.907 21:38:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.907 21:38:23 -- host/discovery.sh@74 -- # notification_count=0 00:23:00.907 21:38:23 -- host/discovery.sh@75 -- # notify_id=2 00:23:00.907 21:38:23 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:00.907 21:38:23 -- common/autotest_common.sh@904 -- # return 0 00:23:00.907 21:38:23 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:00.907 21:38:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.907 21:38:23 -- common/autotest_common.sh@10 -- # set +x 00:23:00.907 [2024-04-24 21:38:23.780215] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:00.907 [2024-04-24 21:38:23.780236] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:00.907 21:38:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.907 21:38:23 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:00.907 21:38:23 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:00.907 21:38:23 -- common/autotest_common.sh@901 -- # local max=10 00:23:00.907 21:38:23 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:00.907 21:38:23 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:00.907 [2024-04-24 21:38:23.787948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.907 [2024-04-24 21:38:23.787968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.907 [2024-04-24 21:38:23.787980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.907 [2024-04-24 21:38:23.787990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.907 [2024-04-24 21:38:23.788001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.907 [2024-04-24 21:38:23.788011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.907 [2024-04-24 21:38:23.788021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.907 [2024-04-24 
21:38:23.788031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.907 [2024-04-24 21:38:23.788040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1660a20 is same with the state(5) to be set 00:23:00.907 21:38:23 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:00.907 21:38:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:00.908 21:38:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:00.908 21:38:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.908 21:38:23 -- common/autotest_common.sh@10 -- # set +x 00:23:00.908 21:38:23 -- host/discovery.sh@59 -- # sort 00:23:00.908 21:38:23 -- host/discovery.sh@59 -- # xargs 00:23:01.166 [2024-04-24 21:38:23.797964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1660a20 (9): Bad file descriptor 00:23:01.166 21:38:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.166 [2024-04-24 21:38:23.808000] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.166 [2024-04-24 21:38:23.808551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.166 [2024-04-24 21:38:23.808889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.166 [2024-04-24 21:38:23.808902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1660a20 with addr=10.0.0.2, port=4420 00:23:01.166 [2024-04-24 21:38:23.808913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1660a20 is same with the state(5) to be set 00:23:01.166 [2024-04-24 21:38:23.808926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1660a20 (9): Bad file descriptor 00:23:01.166 [2024-04-24 21:38:23.808955] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.166 [2024-04-24 21:38:23.808965] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.166 [2024-04-24 21:38:23.808975] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.166 [2024-04-24 21:38:23.808991] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
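The repeating "connect() failed, errno = 111" cycles above are the expected fallout of the nvmf_subsystem_remove_listener call on port 4420 a few lines earlier: every bdev_nvme reconnect attempt to 10.0.0.2:4420 is now refused, so the driver loops through "resetting controller" -> socket error -> "Resetting controller failed" until the discovery log page steers the host over to 4421. errno 111 is ECONNREFUSED on Linux, which can be confirmed with a one-liner, assuming python3 is on the PATH:

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # prints: ECONNREFUSED Connection refused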
00:23:01.166 [2024-04-24 21:38:23.818057] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.166 [2024-04-24 21:38:23.818575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.166 [2024-04-24 21:38:23.818995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 [2024-04-24 21:38:23.819007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1660a20 with addr=10.0.0.2, port=4420 00:23:01.167 [2024-04-24 21:38:23.819017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1660a20 is same with the state(5) to be set 00:23:01.167 [2024-04-24 21:38:23.819030] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1660a20 (9): Bad file descriptor 00:23:01.167 [2024-04-24 21:38:23.819049] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.167 [2024-04-24 21:38:23.819059] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.167 [2024-04-24 21:38:23.819068] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.167 [2024-04-24 21:38:23.819080] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.167 [2024-04-24 21:38:23.828110] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.167 [2024-04-24 21:38:23.828557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 [2024-04-24 21:38:23.829034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 [2024-04-24 21:38:23.829046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1660a20 with addr=10.0.0.2, port=4420 00:23:01.167 [2024-04-24 21:38:23.829056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1660a20 is same with the state(5) to be set 00:23:01.167 [2024-04-24 21:38:23.829069] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1660a20 (9): Bad file descriptor 00:23:01.167 [2024-04-24 21:38:23.829095] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.167 [2024-04-24 21:38:23.829104] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.167 [2024-04-24 21:38:23.829114] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.167 [2024-04-24 21:38:23.829126] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.167 21:38:23 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.167 21:38:23 -- common/autotest_common.sh@904 -- # return 0 00:23:01.167 [2024-04-24 21:38:23.838165] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.167 21:38:23 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:01.167 21:38:23 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:01.167 [2024-04-24 21:38:23.838582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 21:38:23 -- common/autotest_common.sh@901 -- # local max=10 00:23:01.167 [2024-04-24 21:38:23.838925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 [2024-04-24 21:38:23.838940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1660a20 with addr=10.0.0.2, port=4420 00:23:01.167 [2024-04-24 21:38:23.838951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1660a20 is same with the state(5) to be set 00:23:01.167 [2024-04-24 21:38:23.838964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1660a20 (9): Bad file descriptor 00:23:01.167 [2024-04-24 21:38:23.838976] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.167 [2024-04-24 21:38:23.838988] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.167 21:38:23 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:01.167 [2024-04-24 21:38:23.838997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.167 [2024-04-24 21:38:23.839009] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.167 21:38:23 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:01.167 21:38:23 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:01.167 21:38:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.167 21:38:23 -- host/discovery.sh@55 -- # xargs 00:23:01.167 21:38:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.167 21:38:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:01.167 21:38:23 -- common/autotest_common.sh@10 -- # set +x 00:23:01.167 21:38:23 -- host/discovery.sh@55 -- # sort 00:23:01.167 [2024-04-24 21:38:23.848217] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.167 [2024-04-24 21:38:23.848635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 [2024-04-24 21:38:23.848979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 [2024-04-24 21:38:23.848991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1660a20 with addr=10.0.0.2, port=4420 00:23:01.167 [2024-04-24 21:38:23.849001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1660a20 is same with the state(5) to be set 00:23:01.167 [2024-04-24 21:38:23.849013] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1660a20 (9): Bad file descriptor 00:23:01.167 [2024-04-24 21:38:23.849025] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.167 [2024-04-24 21:38:23.849034] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.167 [2024-04-24 21:38:23.849043] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.167 [2024-04-24 21:38:23.849054] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.167 [2024-04-24 21:38:23.858270] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.167 [2024-04-24 21:38:23.858709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 [2024-04-24 21:38:23.859051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 [2024-04-24 21:38:23.859063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1660a20 with addr=10.0.0.2, port=4420 00:23:01.167 [2024-04-24 21:38:23.859073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1660a20 is same with the state(5) to be set 00:23:01.167 [2024-04-24 21:38:23.859085] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1660a20 (9): Bad file descriptor 00:23:01.167 [2024-04-24 21:38:23.859105] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.167 [2024-04-24 21:38:23.859114] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.167 [2024-04-24 21:38:23.859123] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.167 [2024-04-24 21:38:23.859135] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
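The poll predicates in this log all funnel through three small helpers in host/discovery.sh. Reconstructed approximately from the @55/@59/@63 xtrace markers above, they are thin jq pipelines over the host-side RPC socket (the trailing bare xargs flattens the sorted names onto one line):

    function get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    function get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    function get_subsystem_paths() {
        # listener ports (trsvcid) for one controller, numerically sorted
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }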
00:23:01.167 [2024-04-24 21:38:23.868323] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.167 [2024-04-24 21:38:23.868840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 [2024-04-24 21:38:23.869262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 [2024-04-24 21:38:23.869274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1660a20 with addr=10.0.0.2, port=4420 00:23:01.167 [2024-04-24 21:38:23.869287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1660a20 is same with the state(5) to be set 00:23:01.167 [2024-04-24 21:38:23.869299] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1660a20 (9): Bad file descriptor 00:23:01.167 [2024-04-24 21:38:23.869327] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.167 [2024-04-24 21:38:23.869337] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.167 [2024-04-24 21:38:23.869346] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.167 [2024-04-24 21:38:23.869357] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.167 21:38:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.167 [2024-04-24 21:38:23.878377] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.167 [2024-04-24 21:38:23.878856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 [2024-04-24 21:38:23.879283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 [2024-04-24 21:38:23.879294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1660a20 with addr=10.0.0.2, port=4420 00:23:01.167 [2024-04-24 21:38:23.879303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1660a20 is same with the state(5) to be set 00:23:01.167 [2024-04-24 21:38:23.879315] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1660a20 (9): Bad file descriptor 00:23:01.167 [2024-04-24 21:38:23.879327] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.167 [2024-04-24 21:38:23.879335] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.167 [2024-04-24 21:38:23.879344] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.167 [2024-04-24 21:38:23.879355] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:01.167 [2024-04-24 21:38:23.888429] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.167 [2024-04-24 21:38:23.888855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 21:38:23 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:01.167 21:38:23 -- common/autotest_common.sh@904 -- # return 0 00:23:01.167 [2024-04-24 21:38:23.889311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.167 [2024-04-24 21:38:23.889325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1660a20 with addr=10.0.0.2, port=4420 00:23:01.167 [2024-04-24 21:38:23.889335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1660a20 is same with the state(5) to be set 00:23:01.167 [2024-04-24 21:38:23.889347] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1660a20 (9): Bad file descriptor 00:23:01.167 [2024-04-24 21:38:23.889360] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.167 [2024-04-24 21:38:23.889368] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.167 [2024-04-24 21:38:23.889377] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.167 [2024-04-24 21:38:23.889396] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.167 21:38:23 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:01.167 21:38:23 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:01.167 21:38:23 -- common/autotest_common.sh@901 -- # local max=10 00:23:01.167 21:38:23 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:01.167 21:38:23 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:01.167 21:38:23 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:01.168 21:38:23 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:01.168 21:38:23 -- host/discovery.sh@63 -- # sort -n 00:23:01.168 21:38:23 -- host/discovery.sh@63 -- # xargs 00:23:01.168 21:38:23 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:01.168 21:38:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.168 21:38:23 -- common/autotest_common.sh@10 -- # set +x 00:23:01.168 [2024-04-24 21:38:23.898483] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.168 [2024-04-24 21:38:23.898926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.168 [2024-04-24 21:38:23.899328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.168 [2024-04-24 21:38:23.899339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1660a20 with addr=10.0.0.2, port=4420 00:23:01.168 [2024-04-24 21:38:23.899349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1660a20 is same with the state(5) to be set 00:23:01.168 [2024-04-24 21:38:23.899362] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1660a20 (9): Bad file descriptor 00:23:01.168 [2024-04-24 21:38:23.899374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.168 [2024-04-24 21:38:23.899383] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.168 [2024-04-24 21:38:23.899392] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.168 [2024-04-24 21:38:23.899403] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.168 [2024-04-24 21:38:23.908535] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:01.168 [2024-04-24 21:38:23.909000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.168 [2024-04-24 21:38:23.909404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.168 [2024-04-24 21:38:23.909416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1660a20 with addr=10.0.0.2, port=4420 00:23:01.168 [2024-04-24 21:38:23.909426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1660a20 is same with the state(5) to be set 00:23:01.168 [2024-04-24 21:38:23.909459] bdev_nvme.c:6706:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:01.168 [2024-04-24 21:38:23.909474] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:01.168 21:38:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.168 [2024-04-24 21:38:23.909497] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1660a20 (9): Bad file descriptor 00:23:01.168 [2024-04-24 21:38:23.909523] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.168 [2024-04-24 21:38:23.909534] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.168 [2024-04-24 21:38:23.909544] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.168 [2024-04-24 21:38:23.909555] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
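Every "(( max-- ))" / eval / "sleep 1" sequence in this log is one iteration of the waitforcondition() poller from common/autotest_common.sh. Pieced together from the @900-@906 xtrace markers, it looks approximately like this (the final return 1 is inferred; no condition actually times out in this run):

    function waitforcondition() {
        local cond=$1   # @900: the bash expression to poll
        local max=10    # @901: at most 10 attempts
        while (( max-- )); do        # @902
            if eval "$cond"; then    # @903: re-evaluate the predicate
                return 0             # @904: condition met
            fi
            sleep 1                  # @906: back off one second, retry
        done
        return 1
    }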
00:23:01.168 21:38:23 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:23:01.168 21:38:23 -- common/autotest_common.sh@906 -- # sleep 1 00:23:02.102 21:38:24 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.102 21:38:24 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:02.102 21:38:24 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:02.102 21:38:24 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:02.102 21:38:24 -- host/discovery.sh@63 -- # xargs 00:23:02.102 21:38:24 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:02.102 21:38:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.102 21:38:24 -- host/discovery.sh@63 -- # sort -n 00:23:02.102 21:38:24 -- common/autotest_common.sh@10 -- # set +x 00:23:02.102 21:38:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.360 21:38:24 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:23:02.360 21:38:24 -- common/autotest_common.sh@904 -- # return 0 00:23:02.360 21:38:24 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:02.360 21:38:24 -- host/discovery.sh@79 -- # expected_count=0 00:23:02.360 21:38:24 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:02.360 21:38:24 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:02.360 21:38:24 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.360 21:38:24 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.360 21:38:24 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:02.360 21:38:24 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:02.360 21:38:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:02.360 21:38:24 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:02.360 21:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.360 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.360 21:38:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.360 21:38:25 -- host/discovery.sh@74 -- # notification_count=0 00:23:02.360 21:38:25 -- host/discovery.sh@75 -- # notify_id=2 00:23:02.360 21:38:25 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:02.360 21:38:25 -- common/autotest_common.sh@904 -- # return 0 00:23:02.360 21:38:25 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:02.360 21:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.360 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.360 21:38:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.360 21:38:25 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:02.360 21:38:25 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:02.360 21:38:25 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.360 21:38:25 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.360 21:38:25 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:02.360 21:38:25 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:02.360 21:38:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.360 21:38:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:02.360 21:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.360 21:38:25 -- host/discovery.sh@59 -- # xargs 00:23:02.360 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.360 21:38:25 -- host/discovery.sh@59 -- # sort 00:23:02.360 21:38:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.360 21:38:25 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:02.360 21:38:25 -- common/autotest_common.sh@904 -- # return 0 00:23:02.360 21:38:25 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:02.360 21:38:25 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:02.360 21:38:25 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.360 21:38:25 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.360 21:38:25 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:02.360 21:38:25 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:02.360 21:38:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.360 21:38:25 -- host/discovery.sh@55 -- # xargs 00:23:02.360 21:38:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:02.360 21:38:25 -- host/discovery.sh@55 -- # sort 00:23:02.360 21:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.360 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.360 21:38:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.360 21:38:25 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:02.360 21:38:25 -- common/autotest_common.sh@904 -- # return 0 00:23:02.360 21:38:25 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:02.360 21:38:25 -- host/discovery.sh@79 -- # expected_count=2 00:23:02.360 21:38:25 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:02.360 21:38:25 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:02.360 21:38:25 -- common/autotest_common.sh@901 -- # local max=10 00:23:02.360 21:38:25 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:02.360 21:38:25 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:02.360 21:38:25 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:02.360 21:38:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:02.360 21:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.360 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:23:02.360 21:38:25 -- host/discovery.sh@74 -- # jq '. | length' 00:23:02.360 21:38:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.360 21:38:25 -- host/discovery.sh@74 -- # notification_count=2 00:23:02.360 21:38:25 -- host/discovery.sh@75 -- # notify_id=4 00:23:02.360 21:38:25 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:02.360 21:38:25 -- common/autotest_common.sh@904 -- # return 0 00:23:02.360 21:38:25 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:02.360 21:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.360 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:23:03.735 [2024-04-24 21:38:26.270325] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:03.735 [2024-04-24 21:38:26.270343] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:03.735 [2024-04-24 21:38:26.270357] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:03.735 [2024-04-24 21:38:26.358626] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:03.995 [2024-04-24 21:38:26.627003] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:03.995 [2024-04-24 21:38:26.627029] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:03.995 21:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.995 21:38:26 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.995 21:38:26 -- common/autotest_common.sh@638 -- # local es=0 00:23:03.995 21:38:26 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.995 21:38:26 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:03.995 21:38:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:03.995 21:38:26 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:03.995 21:38:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:03.995 21:38:26 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.995 21:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.995 21:38:26 -- 
common/autotest_common.sh@10 -- # set +x 00:23:03.995 request: 00:23:03.995 { 00:23:03.995 "name": "nvme", 00:23:03.995 "trtype": "tcp", 00:23:03.995 "traddr": "10.0.0.2", 00:23:03.995 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:03.995 "adrfam": "ipv4", 00:23:03.995 "trsvcid": "8009", 00:23:03.995 "wait_for_attach": true, 00:23:03.995 "method": "bdev_nvme_start_discovery", 00:23:03.995 "req_id": 1 00:23:03.995 } 00:23:03.995 Got JSON-RPC error response 00:23:03.995 response: 00:23:03.995 { 00:23:03.995 "code": -17, 00:23:03.995 "message": "File exists" 00:23:03.995 } 00:23:03.995 21:38:26 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:03.995 21:38:26 -- common/autotest_common.sh@641 -- # es=1 00:23:03.995 21:38:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:03.995 21:38:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:03.995 21:38:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:03.995 21:38:26 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:03.995 21:38:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:03.995 21:38:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:03.995 21:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.995 21:38:26 -- host/discovery.sh@67 -- # sort 00:23:03.995 21:38:26 -- common/autotest_common.sh@10 -- # set +x 00:23:03.995 21:38:26 -- host/discovery.sh@67 -- # xargs 00:23:03.995 21:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.995 21:38:26 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:03.995 21:38:26 -- host/discovery.sh@146 -- # get_bdev_list 00:23:03.995 21:38:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.995 21:38:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:03.995 21:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.995 21:38:26 -- common/autotest_common.sh@10 -- # set +x 00:23:03.995 21:38:26 -- host/discovery.sh@55 -- # sort 00:23:03.995 21:38:26 -- host/discovery.sh@55 -- # xargs 00:23:03.995 21:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.995 21:38:26 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:03.995 21:38:26 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.995 21:38:26 -- common/autotest_common.sh@638 -- # local es=0 00:23:03.995 21:38:26 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.995 21:38:26 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:03.995 21:38:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:03.995 21:38:26 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:03.995 21:38:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:03.995 21:38:26 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:03.995 21:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.995 21:38:26 -- common/autotest_common.sh@10 -- # set +x 00:23:03.995 request: 00:23:03.995 { 00:23:03.995 "name": "nvme_second", 00:23:03.995 "trtype": "tcp", 00:23:03.995 "traddr": "10.0.0.2", 00:23:03.995 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:23:03.995 "adrfam": "ipv4", 00:23:03.995 "trsvcid": "8009", 00:23:03.995 "wait_for_attach": true, 00:23:03.995 "method": "bdev_nvme_start_discovery", 00:23:03.995 "req_id": 1 00:23:03.995 } 00:23:03.995 Got JSON-RPC error response 00:23:03.995 response: 00:23:03.995 { 00:23:03.995 "code": -17, 00:23:03.995 "message": "File exists" 00:23:03.995 } 00:23:03.995 21:38:26 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:03.995 21:38:26 -- common/autotest_common.sh@641 -- # es=1 00:23:03.995 21:38:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:03.995 21:38:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:03.995 21:38:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:03.995 21:38:26 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:03.995 21:38:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:03.995 21:38:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:03.995 21:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.995 21:38:26 -- common/autotest_common.sh@10 -- # set +x 00:23:03.995 21:38:26 -- host/discovery.sh@67 -- # sort 00:23:03.995 21:38:26 -- host/discovery.sh@67 -- # xargs 00:23:03.995 21:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.995 21:38:26 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:03.995 21:38:26 -- host/discovery.sh@152 -- # get_bdev_list 00:23:03.995 21:38:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.995 21:38:26 -- host/discovery.sh@55 -- # xargs 00:23:03.995 21:38:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:03.995 21:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.995 21:38:26 -- common/autotest_common.sh@10 -- # set +x 00:23:03.995 21:38:26 -- host/discovery.sh@55 -- # sort 00:23:03.995 21:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.995 21:38:26 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:03.995 21:38:26 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:03.995 21:38:26 -- common/autotest_common.sh@638 -- # local es=0 00:23:03.995 21:38:26 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:03.995 21:38:26 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:03.995 21:38:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:03.995 21:38:26 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:03.995 21:38:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:03.995 21:38:26 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:03.995 21:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.995 21:38:26 -- common/autotest_common.sh@10 -- # set +x 00:23:05.369 [2024-04-24 21:38:27.850628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.369 [2024-04-24 21:38:27.851098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.369 [2024-04-24 21:38:27.851112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x16727a0 with addr=10.0.0.2, port=8010 00:23:05.369 [2024-04-24 21:38:27.851127] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:05.370 [2024-04-24 21:38:27.851135] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:05.370 [2024-04-24 21:38:27.851144] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:06.305 [2024-04-24 21:38:28.852931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:06.305 [2024-04-24 21:38:28.853355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:06.305 [2024-04-24 21:38:28.853368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1666750 with addr=10.0.0.2, port=8010 00:23:06.305 [2024-04-24 21:38:28.853381] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:06.305 [2024-04-24 21:38:28.853390] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:06.305 [2024-04-24 21:38:28.853398] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:07.248 [2024-04-24 21:38:29.854970] bdev_nvme.c:6962:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:07.248 request: 00:23:07.248 { 00:23:07.248 "name": "nvme_second", 00:23:07.248 "trtype": "tcp", 00:23:07.248 "traddr": "10.0.0.2", 00:23:07.248 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:07.248 "adrfam": "ipv4", 00:23:07.248 "trsvcid": "8010", 00:23:07.248 "attach_timeout_ms": 3000, 00:23:07.248 "method": "bdev_nvme_start_discovery", 00:23:07.248 "req_id": 1 00:23:07.248 } 00:23:07.248 Got JSON-RPC error response 00:23:07.248 response: 00:23:07.248 { 00:23:07.248 "code": -110, 00:23:07.248 "message": "Connection timed out" 00:23:07.248 } 00:23:07.248 21:38:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:07.248 21:38:29 -- common/autotest_common.sh@641 -- # es=1 00:23:07.248 21:38:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:07.248 21:38:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:07.248 21:38:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:07.248 21:38:29 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:07.248 21:38:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:07.248 21:38:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:07.248 21:38:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.248 21:38:29 -- common/autotest_common.sh@10 -- # set +x 00:23:07.248 21:38:29 -- host/discovery.sh@67 -- # sort 00:23:07.248 21:38:29 -- host/discovery.sh@67 -- # xargs 00:23:07.248 21:38:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.248 21:38:29 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:07.248 21:38:29 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:07.248 21:38:29 -- host/discovery.sh@161 -- # kill 2953969 00:23:07.248 21:38:29 -- host/discovery.sh@162 -- # nvmftestfini 00:23:07.248 21:38:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:07.248 21:38:29 -- nvmf/common.sh@117 -- # sync 00:23:07.248 21:38:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.248 21:38:29 -- nvmf/common.sh@120 -- # set +e 00:23:07.248 21:38:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.248 21:38:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.248 rmmod nvme_tcp 00:23:07.248 rmmod nvme_fabrics 
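The three NOT-wrapped bdev_nvme_start_discovery calls above cover both error paths: registering a discovery service that already exists on 10.0.0.2:8009 fails immediately with -17 "File exists" (whether the bdev prefix is nvme or nvme_second), while pointing nvme_second at the unused port 8010 with -T 3000 gives the attach three seconds before failing with -110 "Connection timed out". Since rpc_cmd is effectively autotest's wrapper around SPDK's scripts/rpc.py, the same two failures could be reproduced standalone -- a sketch, assuming the target from this run were still up with its host socket at /tmp/host.sock:

    # duplicate registration on 8009 -> JSON-RPC error -17 "File exists"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # nothing listens on 8010 -> error -110 "Connection timed out" after 3s
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000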
00:23:07.248 rmmod nvme_keyring 00:23:07.248 21:38:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.248 21:38:29 -- nvmf/common.sh@124 -- # set -e 00:23:07.248 21:38:29 -- nvmf/common.sh@125 -- # return 0 00:23:07.248 21:38:29 -- nvmf/common.sh@478 -- # '[' -n 2953698 ']' 00:23:07.248 21:38:29 -- nvmf/common.sh@479 -- # killprocess 2953698 00:23:07.248 21:38:29 -- common/autotest_common.sh@936 -- # '[' -z 2953698 ']' 00:23:07.248 21:38:29 -- common/autotest_common.sh@940 -- # kill -0 2953698 00:23:07.248 21:38:29 -- common/autotest_common.sh@941 -- # uname 00:23:07.248 21:38:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:07.248 21:38:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2953698 00:23:07.248 21:38:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:07.248 21:38:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:07.248 21:38:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2953698' 00:23:07.248 killing process with pid 2953698 00:23:07.248 21:38:30 -- common/autotest_common.sh@955 -- # kill 2953698 00:23:07.248 21:38:30 -- common/autotest_common.sh@960 -- # wait 2953698 00:23:07.507 21:38:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:07.507 21:38:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:07.507 21:38:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:07.507 21:38:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.507 21:38:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:07.507 21:38:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.507 21:38:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.507 21:38:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.040 21:38:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:10.040 00:23:10.040 real 0m20.261s 00:23:10.040 user 0m24.583s 00:23:10.040 sys 0m7.163s 00:23:10.040 21:38:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:10.040 21:38:32 -- common/autotest_common.sh@10 -- # set +x 00:23:10.040 ************************************ 00:23:10.040 END TEST nvmf_discovery 00:23:10.040 ************************************ 00:23:10.040 21:38:32 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:10.040 21:38:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:10.040 21:38:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:10.040 21:38:32 -- common/autotest_common.sh@10 -- # set +x 00:23:10.040 ************************************ 00:23:10.040 START TEST nvmf_discovery_remove_ifc 00:23:10.040 ************************************ 00:23:10.040 21:38:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:10.040 * Looking for test storage... 
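The killprocess 2953698 teardown traced above follows the usual autotest shape; reconstructed approximately from the @936-@960 markers (the sudo special case probed at @946 is elided here):

    function killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                               # @936: need a pid
        kill -0 "$pid" 2> /dev/null || return 0                 # @940: already gone?
        local process_name
        [[ $(uname) == Linux ]] &&                              # @941
            process_name=$(ps --no-headers -o comm= "$pid")     # @942: e.g. reactor_1
        # @946: the real helper special-cases process_name == sudo
        echo "killing process with pid $pid"                    # @954
        kill "$pid"                                             # @955
        wait "$pid"                                             # @960: reap the nvmf target
    }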
00:23:10.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:10.040 21:38:32 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:10.040 21:38:32 -- nvmf/common.sh@7 -- # uname -s 00:23:10.040 21:38:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.040 21:38:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.040 21:38:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.040 21:38:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.040 21:38:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.040 21:38:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.040 21:38:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.040 21:38:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.040 21:38:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.040 21:38:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.040 21:38:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:10.040 21:38:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:10.040 21:38:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.040 21:38:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.040 21:38:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.040 21:38:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.040 21:38:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.040 21:38:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.040 21:38:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.040 21:38:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.040 21:38:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.040 21:38:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.040 21:38:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.040 21:38:32 -- paths/export.sh@5 -- # export PATH 00:23:10.040 21:38:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.040 21:38:32 -- nvmf/common.sh@47 -- # : 0 00:23:10.040 21:38:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:10.040 21:38:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:10.040 21:38:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.040 21:38:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.040 21:38:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.040 21:38:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:10.040 21:38:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:10.040 21:38:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:10.040 21:38:32 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:10.040 21:38:32 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:10.040 21:38:32 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:10.040 21:38:32 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:10.040 21:38:32 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:10.040 21:38:32 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:10.040 21:38:32 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:10.040 21:38:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:10.040 21:38:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.040 21:38:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:10.040 21:38:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:10.040 21:38:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:10.040 21:38:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.040 21:38:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.040 21:38:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.040 21:38:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:10.040 21:38:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:10.040 21:38:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:10.040 21:38:32 -- common/autotest_common.sh@10 -- # set +x 00:23:16.606 21:38:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:16.606 21:38:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:16.606 21:38:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:16.606 21:38:39 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:16.606 21:38:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:16.606 21:38:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:16.606 21:38:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:16.606 21:38:39 -- nvmf/common.sh@295 -- # net_devs=() 00:23:16.606 21:38:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:16.606 21:38:39 -- nvmf/common.sh@296 -- # e810=() 00:23:16.606 21:38:39 -- nvmf/common.sh@296 -- # local -ga e810 00:23:16.606 21:38:39 -- nvmf/common.sh@297 -- # x722=() 00:23:16.606 21:38:39 -- nvmf/common.sh@297 -- # local -ga x722 00:23:16.606 21:38:39 -- nvmf/common.sh@298 -- # mlx=() 00:23:16.606 21:38:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:16.606 21:38:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.606 21:38:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.606 21:38:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.606 21:38:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.606 21:38:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.606 21:38:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.606 21:38:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.606 21:38:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.606 21:38:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.606 21:38:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.606 21:38:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.606 21:38:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:16.606 21:38:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:16.606 21:38:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:16.606 21:38:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:16.606 21:38:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:16.606 21:38:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:16.606 21:38:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.606 21:38:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:16.606 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:16.606 21:38:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.606 21:38:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.606 21:38:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.606 21:38:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.606 21:38:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.606 21:38:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.606 21:38:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:16.606 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:16.606 21:38:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.606 21:38:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.606 21:38:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.606 21:38:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.606 21:38:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.606 21:38:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:16.606 21:38:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:16.606 21:38:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:16.606 21:38:39 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.606 21:38:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.606 21:38:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:16.606 21:38:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.606 21:38:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:16.606 Found net devices under 0000:af:00.0: cvl_0_0 00:23:16.606 21:38:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.606 21:38:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.606 21:38:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.607 21:38:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:16.607 21:38:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.607 21:38:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:16.607 Found net devices under 0000:af:00.1: cvl_0_1 00:23:16.607 21:38:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.607 21:38:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:16.607 21:38:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:16.607 21:38:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:16.607 21:38:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:16.607 21:38:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:16.607 21:38:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.607 21:38:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.607 21:38:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.607 21:38:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:16.607 21:38:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.607 21:38:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.607 21:38:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:16.607 21:38:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.607 21:38:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.607 21:38:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:16.607 21:38:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:16.607 21:38:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.607 21:38:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.607 21:38:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.607 21:38:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.607 21:38:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:16.607 21:38:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.607 21:38:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.866 21:38:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.866 21:38:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:16.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:16.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:23:16.866 00:23:16.866 --- 10.0.0.2 ping statistics --- 00:23:16.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.866 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:23:16.866 21:38:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:16.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:23:16.866 00:23:16.866 --- 10.0.0.1 ping statistics --- 00:23:16.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.866 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:16.866 21:38:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.866 21:38:39 -- nvmf/common.sh@411 -- # return 0 00:23:16.866 21:38:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:16.866 21:38:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.866 21:38:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:16.866 21:38:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:16.866 21:38:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.866 21:38:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:16.866 21:38:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:16.866 21:38:39 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:16.866 21:38:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:16.866 21:38:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:16.866 21:38:39 -- common/autotest_common.sh@10 -- # set +x 00:23:16.866 21:38:39 -- nvmf/common.sh@470 -- # nvmfpid=2959468 00:23:16.866 21:38:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:16.866 21:38:39 -- nvmf/common.sh@471 -- # waitforlisten 2959468 00:23:16.866 21:38:39 -- common/autotest_common.sh@817 -- # '[' -z 2959468 ']' 00:23:16.866 21:38:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.866 21:38:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:16.866 21:38:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.866 21:38:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:16.866 21:38:39 -- common/autotest_common.sh@10 -- # set +x 00:23:16.866 [2024-04-24 21:38:39.635770] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:23:16.866 [2024-04-24 21:38:39.635821] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.866 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.866 [2024-04-24 21:38:39.711409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.125 [2024-04-24 21:38:39.783104] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.125 [2024-04-24 21:38:39.783142] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:17.125 [2024-04-24 21:38:39.783152] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.125 [2024-04-24 21:38:39.783161] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.125 [2024-04-24 21:38:39.783168] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.125 [2024-04-24 21:38:39.783192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.701 21:38:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:17.701 21:38:40 -- common/autotest_common.sh@850 -- # return 0 00:23:17.701 21:38:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:17.701 21:38:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:17.701 21:38:40 -- common/autotest_common.sh@10 -- # set +x 00:23:17.701 21:38:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.701 21:38:40 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:17.701 21:38:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.701 21:38:40 -- common/autotest_common.sh@10 -- # set +x 00:23:17.701 [2024-04-24 21:38:40.482063] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.701 [2024-04-24 21:38:40.490232] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:17.701 null0 00:23:17.701 [2024-04-24 21:38:40.522204] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.702 21:38:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.702 21:38:40 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2959744 00:23:17.702 21:38:40 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:17.702 21:38:40 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2959744 /tmp/host.sock 00:23:17.702 21:38:40 -- common/autotest_common.sh@817 -- # '[' -z 2959744 ']' 00:23:17.702 21:38:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:17.702 21:38:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:17.702 21:38:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:17.702 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:17.702 21:38:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:17.702 21:38:40 -- common/autotest_common.sh@10 -- # set +x 00:23:17.960 [2024-04-24 21:38:40.593397] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
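Note: the ping results above come from a two-port loopback topology. Condensing the nvmf_tcp_init steps from the trace (interface names, addresses, and port numbers are copied from the log; the nvmf_tgt path is abbreviated):

# One NIC port (cvl_0_0) is moved into a private namespace for the target;
# its peer (cvl_0_1) stays in the root namespace for the initiator.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# The target app then runs entirely inside the namespace:
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2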
00:23:17.960 [2024-04-24 21:38:40.593443] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2959744 ] 00:23:17.960 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.960 [2024-04-24 21:38:40.664178] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.960 [2024-04-24 21:38:40.737456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.527 21:38:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:18.527 21:38:41 -- common/autotest_common.sh@850 -- # return 0 00:23:18.527 21:38:41 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.527 21:38:41 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:18.527 21:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.527 21:38:41 -- common/autotest_common.sh@10 -- # set +x 00:23:18.527 21:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.527 21:38:41 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:18.527 21:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.527 21:38:41 -- common/autotest_common.sh@10 -- # set +x 00:23:18.786 21:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.786 21:38:41 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:18.786 21:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.786 21:38:41 -- common/autotest_common.sh@10 -- # set +x 00:23:19.721 [2024-04-24 21:38:42.532675] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:19.721 [2024-04-24 21:38:42.532699] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:19.721 [2024-04-24 21:38:42.532718] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:19.979 [2024-04-24 21:38:42.620971] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:19.979 [2024-04-24 21:38:42.723583] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:19.979 [2024-04-24 21:38:42.723628] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:19.979 [2024-04-24 21:38:42.723647] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:19.979 [2024-04-24 21:38:42.723661] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:19.979 [2024-04-24 21:38:42.723678] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:19.979 21:38:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.979 21:38:42 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:19.979 21:38:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:19.979 [2024-04-24 21:38:42.730565] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1071950 was 
disconnected and freed. delete nvme_qpair. 00:23:19.979 21:38:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.979 21:38:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:19.979 21:38:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.979 21:38:42 -- common/autotest_common.sh@10 -- # set +x 00:23:19.979 21:38:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:19.979 21:38:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:19.979 21:38:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.979 21:38:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:19.980 21:38:42 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:19.980 21:38:42 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:20.238 21:38:42 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:20.238 21:38:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:20.238 21:38:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.238 21:38:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:20.238 21:38:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.238 21:38:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:20.238 21:38:42 -- common/autotest_common.sh@10 -- # set +x 00:23:20.238 21:38:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:20.238 21:38:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.238 21:38:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:20.238 21:38:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:21.173 21:38:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:21.173 21:38:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:21.173 21:38:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:21.173 21:38:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.173 21:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:21.173 21:38:43 -- common/autotest_common.sh@10 -- # set +x 00:23:21.173 21:38:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:21.173 21:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:21.173 21:38:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:21.173 21:38:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:22.547 21:38:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:22.547 21:38:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.547 21:38:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:22.547 21:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:22.547 21:38:45 -- common/autotest_common.sh@10 -- # set +x 00:23:22.547 21:38:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:22.547 21:38:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:22.547 21:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:22.547 21:38:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:22.547 21:38:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:23.485 21:38:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:23.485 21:38:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.485 21:38:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:23.485 21:38:46 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.485 21:38:46 -- common/autotest_common.sh@10 -- # set +x 00:23:23.485 21:38:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:23.485 21:38:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:23.485 21:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.485 21:38:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:23.485 21:38:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:24.426 21:38:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:24.426 21:38:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.426 21:38:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:24.426 21:38:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:24.426 21:38:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.426 21:38:47 -- common/autotest_common.sh@10 -- # set +x 00:23:24.426 21:38:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:24.426 21:38:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.426 21:38:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:24.426 21:38:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:25.361 [2024-04-24 21:38:48.164510] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:25.361 [2024-04-24 21:38:48.164553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.361 [2024-04-24 21:38:48.164566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.361 [2024-04-24 21:38:48.164577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.361 [2024-04-24 21:38:48.164587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.361 [2024-04-24 21:38:48.164598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.361 [2024-04-24 21:38:48.164608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.361 [2024-04-24 21:38:48.164618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.361 [2024-04-24 21:38:48.164626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.361 [2024-04-24 21:38:48.164636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:25.361 [2024-04-24 21:38:48.164645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.361 [2024-04-24 21:38:48.164654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1038b90 is same with the state(5) to be set 00:23:25.361 21:38:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:25.361 21:38:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.361 [2024-04-24 
21:38:48.174530] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1038b90 (9): Bad file descriptor 00:23:25.361 21:38:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:25.361 21:38:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.361 21:38:48 -- common/autotest_common.sh@10 -- # set +x 00:23:25.361 21:38:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:25.361 21:38:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:25.361 [2024-04-24 21:38:48.184570] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.361 21:38:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.361 21:38:48 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:25.361 21:38:48 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:26.734 21:38:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:26.734 21:38:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:26.734 21:38:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.734 21:38:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.734 21:38:49 -- common/autotest_common.sh@10 -- # set +x 00:23:26.734 21:38:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:26.734 21:38:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:26.734 [2024-04-24 21:38:49.233464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:27.668 [2024-04-24 21:38:50.256486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:27.668 [2024-04-24 21:38:50.256543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1038b90 with addr=10.0.0.2, port=4420 00:23:27.668 [2024-04-24 21:38:50.256565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1038b90 is same with the state(5) to be set 00:23:27.668 [2024-04-24 21:38:50.256975] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1038b90 (9): Bad file descriptor 00:23:27.668 [2024-04-24 21:38:50.257008] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
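Note: the recurring rpc_cmd/jq/sort/xargs blocks are the harness polling the host app's bdev list once per second. A plausible reconstruction of the two helpers as the xtrace suggests (bodies inferred from the trace, not copied from the script):

get_bdev_list() {
    # Ask the host app (listening on /tmp/host.sock) for its bdevs and
    # flatten the names into one sorted, space-separated line.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Spin until the list matches the expected value ('' means "gone").
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}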
00:23:27.668 [2024-04-24 21:38:50.257033] bdev_nvme.c:6670:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:27.668 [2024-04-24 21:38:50.257061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.668 [2024-04-24 21:38:50.257078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.668 [2024-04-24 21:38:50.257094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.668 [2024-04-24 21:38:50.257107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.668 [2024-04-24 21:38:50.257120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.668 [2024-04-24 21:38:50.257133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.668 [2024-04-24 21:38:50.257146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.668 [2024-04-24 21:38:50.257159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.668 [2024-04-24 21:38:50.257173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.668 [2024-04-24 21:38:50.257185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.668 [2024-04-24 21:38:50.257198] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
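Note: the quick reset/fail cascade here (errno 110, then the discovery controller "in failed state") follows directly from the timeouts passed when discovery was started. The same call expressed against SPDK's scripts/rpc.py, assuming it backs the harness's rpc_cmd wrapper; every flag is copied from the trace:

# Short timeouts make the interface-removal test converge quickly:
# give up on a lost controller after 2 s, retry the connection every
# 1 s, and fail outstanding I/O after 1 s.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach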
00:23:27.668 [2024-04-24 21:38:50.257595] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1038070 (9): Bad file descriptor 00:23:27.668 [2024-04-24 21:38:50.258611] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:27.668 [2024-04-24 21:38:50.258630] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:27.668 21:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.668 21:38:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:27.668 21:38:50 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:28.604 21:38:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:28.604 21:38:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.604 21:38:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:28.604 21:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.604 21:38:51 -- common/autotest_common.sh@10 -- # set +x 00:23:28.604 21:38:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:28.604 21:38:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:28.604 21:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.604 21:38:51 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:28.604 21:38:51 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.604 21:38:51 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.604 21:38:51 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:28.604 21:38:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:28.604 21:38:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.604 21:38:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:28.604 21:38:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:28.604 21:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.604 21:38:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:28.604 21:38:51 -- common/autotest_common.sh@10 -- # set +x 00:23:28.604 21:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.862 21:38:51 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:28.862 21:38:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:29.798 [2024-04-24 21:38:52.317707] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:29.798 [2024-04-24 21:38:52.317724] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:29.799 [2024-04-24 21:38:52.317742] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.799 [2024-04-24 21:38:52.403993] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:29.799 21:38:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:29.799 21:38:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:29.799 21:38:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.799 21:38:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:29.799 21:38:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.799 21:38:52 -- common/autotest_common.sh@10 -- # set +x 
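Note on the odd-looking comparisons such as [[ '' != \n\v\m\e\1\n\1 ]] above: inside [[ ]] the right-hand side of != is a glob pattern, so xtrace shows every character backslash-escaped to force a literal match. Escaping and quoting are equivalent here:

# Both tests compare against the literal string nvme1n1; both echo.
[[ nvme1n1 != \n\v\m\e\1\n\1 ]] || echo 'literal match via escapes'
[[ nvme1n1 != "nvme1n1" ]]      || echo 'literal match via quotes'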
00:23:29.799 21:38:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:29.799 21:38:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.799 21:38:52 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:29.799 21:38:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:29.799 [2024-04-24 21:38:52.587119] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:29.799 [2024-04-24 21:38:52.587152] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:29.799 [2024-04-24 21:38:52.587170] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:29.799 [2024-04-24 21:38:52.587184] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:29.799 [2024-04-24 21:38:52.587192] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:29.799 [2024-04-24 21:38:52.596603] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x107bfc0 was disconnected and freed. delete nvme_qpair. 00:23:30.733 21:38:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.733 21:38:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.733 21:38:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.733 21:38:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.733 21:38:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.733 21:38:53 -- common/autotest_common.sh@10 -- # set +x 00:23:30.733 21:38:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.733 21:38:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.733 21:38:53 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:30.733 21:38:53 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:30.733 21:38:53 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2959744 00:23:30.733 21:38:53 -- common/autotest_common.sh@936 -- # '[' -z 2959744 ']' 00:23:30.733 21:38:53 -- common/autotest_common.sh@940 -- # kill -0 2959744 00:23:30.733 21:38:53 -- common/autotest_common.sh@941 -- # uname 00:23:30.992 21:38:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:30.992 21:38:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2959744 00:23:30.992 21:38:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:30.992 21:38:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:30.992 21:38:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2959744' 00:23:30.992 killing process with pid 2959744 00:23:30.992 21:38:53 -- common/autotest_common.sh@955 -- # kill 2959744 00:23:30.992 21:38:53 -- common/autotest_common.sh@960 -- # wait 2959744 00:23:30.992 21:38:53 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:30.992 21:38:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:30.992 21:38:53 -- nvmf/common.sh@117 -- # sync 00:23:30.992 21:38:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:30.992 21:38:53 -- nvmf/common.sh@120 -- # set +e 00:23:30.992 21:38:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:30.992 21:38:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:31.251 rmmod nvme_tcp 00:23:31.251 rmmod nvme_fabrics 00:23:31.251 rmmod nvme_keyring 00:23:31.251 21:38:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:31.251 21:38:53 -- nvmf/common.sh@124 -- # set -e 00:23:31.251 21:38:53 
-- nvmf/common.sh@125 -- # return 0 00:23:31.251 21:38:53 -- nvmf/common.sh@478 -- # '[' -n 2959468 ']' 00:23:31.251 21:38:53 -- nvmf/common.sh@479 -- # killprocess 2959468 00:23:31.251 21:38:53 -- common/autotest_common.sh@936 -- # '[' -z 2959468 ']' 00:23:31.251 21:38:53 -- common/autotest_common.sh@940 -- # kill -0 2959468 00:23:31.251 21:38:53 -- common/autotest_common.sh@941 -- # uname 00:23:31.251 21:38:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:31.251 21:38:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2959468 00:23:31.251 21:38:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:31.251 21:38:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:31.251 21:38:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2959468' 00:23:31.251 killing process with pid 2959468 00:23:31.251 21:38:53 -- common/autotest_common.sh@955 -- # kill 2959468 00:23:31.251 21:38:53 -- common/autotest_common.sh@960 -- # wait 2959468 00:23:31.510 21:38:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:31.510 21:38:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:31.510 21:38:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:31.510 21:38:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:31.510 21:38:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:31.510 21:38:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.510 21:38:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.510 21:38:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.414 21:38:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:33.414 00:23:33.414 real 0m23.769s 00:23:33.414 user 0m27.490s 00:23:33.414 sys 0m7.496s 00:23:33.414 21:38:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:33.414 21:38:56 -- common/autotest_common.sh@10 -- # set +x 00:23:33.414 ************************************ 00:23:33.414 END TEST nvmf_discovery_remove_ifc 00:23:33.414 ************************************ 00:23:33.674 21:38:56 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:33.674 21:38:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:33.674 21:38:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:33.674 21:38:56 -- common/autotest_common.sh@10 -- # set +x 00:23:33.674 ************************************ 00:23:33.674 START TEST nvmf_identify_kernel_target 00:23:33.674 ************************************ 00:23:33.674 21:38:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:33.674 * Looking for test storage... 
00:23:33.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.934 21:38:56 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.934 21:38:56 -- nvmf/common.sh@7 -- # uname -s 00:23:33.934 21:38:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.934 21:38:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.934 21:38:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.934 21:38:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.934 21:38:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.934 21:38:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.934 21:38:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.934 21:38:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.934 21:38:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.934 21:38:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.934 21:38:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:33.934 21:38:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:33.934 21:38:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.934 21:38:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.934 21:38:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.934 21:38:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.934 21:38:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.934 21:38:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.934 21:38:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.934 21:38:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.934 21:38:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.934 21:38:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.934 21:38:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.934 21:38:56 -- paths/export.sh@5 -- # export PATH 00:23:33.934 21:38:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.934 21:38:56 -- nvmf/common.sh@47 -- # : 0 00:23:33.934 21:38:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.934 21:38:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.934 21:38:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.934 21:38:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.934 21:38:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.934 21:38:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:33.934 21:38:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.934 21:38:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.934 21:38:56 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:33.934 21:38:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:33.934 21:38:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.934 21:38:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:33.934 21:38:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:33.934 21:38:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:33.934 21:38:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.934 21:38:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.934 21:38:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.934 21:38:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:33.934 21:38:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:33.934 21:38:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:33.934 21:38:56 -- common/autotest_common.sh@10 -- # set +x 00:23:40.533 21:39:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:40.533 21:39:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:40.533 21:39:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:40.533 21:39:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:40.533 21:39:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:40.533 21:39:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:40.533 21:39:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:40.533 21:39:02 -- nvmf/common.sh@295 -- # net_devs=() 00:23:40.533 21:39:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:40.533 21:39:02 -- nvmf/common.sh@296 -- # e810=() 00:23:40.533 21:39:02 -- nvmf/common.sh@296 -- # local -ga e810 00:23:40.533 21:39:02 -- nvmf/common.sh@297 -- # 
x722=() 00:23:40.533 21:39:02 -- nvmf/common.sh@297 -- # local -ga x722 00:23:40.533 21:39:02 -- nvmf/common.sh@298 -- # mlx=() 00:23:40.533 21:39:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:40.533 21:39:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.533 21:39:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.533 21:39:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.533 21:39:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.533 21:39:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.533 21:39:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.533 21:39:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.533 21:39:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.533 21:39:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.533 21:39:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.533 21:39:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.533 21:39:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:40.533 21:39:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:40.533 21:39:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:40.533 21:39:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.533 21:39:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:40.533 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:40.533 21:39:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.533 21:39:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:40.533 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:40.533 21:39:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:40.533 21:39:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.533 21:39:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.533 21:39:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:40.533 21:39:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.533 21:39:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:40.533 Found net devices under 0000:af:00.0: cvl_0_0 00:23:40.533 21:39:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:23:40.533 21:39:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.533 21:39:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.533 21:39:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:40.533 21:39:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.533 21:39:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:40.533 Found net devices under 0000:af:00.1: cvl_0_1 00:23:40.533 21:39:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.533 21:39:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:40.533 21:39:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:40.533 21:39:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:40.533 21:39:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:40.533 21:39:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.533 21:39:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.533 21:39:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.533 21:39:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:40.533 21:39:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.533 21:39:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.533 21:39:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:40.533 21:39:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.533 21:39:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.533 21:39:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:40.533 21:39:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:40.533 21:39:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.533 21:39:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.533 21:39:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.533 21:39:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.533 21:39:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:40.533 21:39:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.533 21:39:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.533 21:39:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.533 21:39:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:40.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:23:40.533 00:23:40.533 --- 10.0.0.2 ping statistics --- 00:23:40.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.533 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:23:40.533 21:39:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:40.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:23:40.533 00:23:40.533 --- 10.0.0.1 ping statistics --- 00:23:40.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.533 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:23:40.533 21:39:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.534 21:39:03 -- nvmf/common.sh@411 -- # return 0 00:23:40.534 21:39:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:40.534 21:39:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.534 21:39:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:40.534 21:39:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:40.534 21:39:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.534 21:39:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:40.534 21:39:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:40.534 21:39:03 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:40.534 21:39:03 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:40.534 21:39:03 -- nvmf/common.sh@717 -- # local ip 00:23:40.534 21:39:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:40.534 21:39:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:40.534 21:39:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.534 21:39:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.534 21:39:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:40.534 21:39:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.534 21:39:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:40.534 21:39:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:40.534 21:39:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:40.534 21:39:03 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:40.534 21:39:03 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:40.534 21:39:03 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:40.534 21:39:03 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:40.534 21:39:03 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:40.534 21:39:03 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:40.534 21:39:03 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:40.534 21:39:03 -- nvmf/common.sh@628 -- # local block nvme 00:23:40.534 21:39:03 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:40.534 21:39:03 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:40.534 21:39:03 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:40.534 21:39:03 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:43.819 Waiting for block devices as requested 00:23:43.819 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:43.819 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:43.819 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:43.819 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:44.077 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:44.077 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:44.077 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:44.078 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:44.335 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:44.335 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:44.335 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:44.594 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:44.594 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:44.594 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:44.853 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:44.853 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:44.853 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:23:45.112 21:39:07 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:45.112 21:39:07 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:45.112 21:39:07 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:45.112 21:39:07 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:45.112 21:39:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:45.112 21:39:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:45.112 21:39:07 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:45.112 21:39:07 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:45.112 21:39:07 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:45.112 No valid GPT data, bailing 00:23:45.112 21:39:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:45.112 21:39:07 -- scripts/common.sh@391 -- # pt= 00:23:45.112 21:39:07 -- scripts/common.sh@392 -- # return 1 00:23:45.112 21:39:07 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:45.112 21:39:07 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:23:45.112 21:39:07 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:45.112 21:39:07 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:45.112 21:39:07 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:45.112 21:39:07 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:45.112 21:39:07 -- nvmf/common.sh@656 -- # echo 1 00:23:45.112 21:39:07 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:23:45.112 21:39:07 -- nvmf/common.sh@658 -- # echo 1 00:23:45.112 21:39:07 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:23:45.112 21:39:07 -- nvmf/common.sh@661 -- # echo tcp 00:23:45.112 21:39:07 -- nvmf/common.sh@662 -- # echo 4420 00:23:45.112 21:39:07 -- nvmf/common.sh@663 -- # echo ipv4 00:23:45.112 21:39:07 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:45.112 21:39:07 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:23:45.112 00:23:45.112 Discovery Log Number of Records 2, Generation counter 2 00:23:45.112 =====Discovery Log Entry 0====== 00:23:45.112 trtype: tcp 00:23:45.112 adrfam: ipv4 00:23:45.112 subtype: current discovery subsystem 00:23:45.112 treq: not specified, sq flow control disable supported 00:23:45.112 portid: 1 00:23:45.112 trsvcid: 4420 00:23:45.112 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:45.112 traddr: 10.0.0.1 00:23:45.112 eflags: none 00:23:45.112 sectype: none 00:23:45.112 =====Discovery Log Entry 1====== 00:23:45.112 trtype: tcp 00:23:45.112 adrfam: ipv4 00:23:45.112 subtype: nvme subsystem 00:23:45.112 treq: not specified, sq flow control disable supported 00:23:45.112 portid: 1 00:23:45.112 trsvcid: 4420 00:23:45.112 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:45.112 traddr: 10.0.0.1 00:23:45.112 eflags: none 00:23:45.112 sectype: none 00:23:45.112 21:39:07 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:45.112 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:45.373 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.373 ===================================================== 00:23:45.373 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:45.373 ===================================================== 00:23:45.373 Controller Capabilities/Features 00:23:45.373 ================================ 00:23:45.373 Vendor ID: 0000 00:23:45.373 Subsystem Vendor ID: 0000 00:23:45.373 Serial Number: e5cc8ee9d632ca2a0281 00:23:45.373 Model Number: Linux 00:23:45.373 Firmware Version: 6.7.0-68 00:23:45.373 Recommended Arb Burst: 0 00:23:45.373 IEEE OUI Identifier: 00 00 00 00:23:45.373 Multi-path I/O 00:23:45.373 May have multiple subsystem ports: No 00:23:45.373 May have multiple controllers: No 00:23:45.373 Associated with SR-IOV VF: No 00:23:45.373 Max Data Transfer Size: Unlimited 00:23:45.373 Max Number of Namespaces: 0 00:23:45.373 Max Number of I/O Queues: 1024 00:23:45.373 NVMe Specification Version (VS): 1.3 00:23:45.373 NVMe Specification Version (Identify): 1.3 00:23:45.373 Maximum Queue Entries: 1024 00:23:45.373 Contiguous Queues Required: No 00:23:45.373 Arbitration Mechanisms Supported 00:23:45.373 Weighted Round Robin: Not Supported 00:23:45.373 Vendor Specific: Not Supported 00:23:45.373 Reset Timeout: 7500 ms 00:23:45.373 Doorbell Stride: 4 bytes 00:23:45.373 NVM Subsystem Reset: Not Supported 00:23:45.373 Command Sets Supported 00:23:45.373 NVM Command Set: Supported 00:23:45.373 Boot Partition: Not Supported 00:23:45.373 Memory Page Size Minimum: 4096 bytes 00:23:45.373 Memory Page Size Maximum: 4096 bytes 00:23:45.373 Persistent Memory Region: Not Supported 00:23:45.373 Optional Asynchronous Events Supported 00:23:45.373 Namespace Attribute Notices: Not Supported 00:23:45.373 Firmware Activation Notices: Not Supported 00:23:45.373 ANA Change Notices: Not Supported 00:23:45.373 PLE Aggregate Log Change Notices: Not Supported 00:23:45.373 LBA Status Info Alert Notices: Not Supported 00:23:45.373 EGE Aggregate Log Change Notices: Not Supported 00:23:45.373 Normal NVM Subsystem Shutdown event: Not Supported 00:23:45.373 Zone Descriptor Change Notices: Not Supported 00:23:45.373 Discovery Log Change Notices: Supported 
00:23:45.373 Controller Attributes 00:23:45.373 128-bit Host Identifier: Not Supported 00:23:45.373 Non-Operational Permissive Mode: Not Supported 00:23:45.373 NVM Sets: Not Supported 00:23:45.373 Read Recovery Levels: Not Supported 00:23:45.373 Endurance Groups: Not Supported 00:23:45.373 Predictable Latency Mode: Not Supported 00:23:45.373 Traffic Based Keep ALive: Not Supported 00:23:45.373 Namespace Granularity: Not Supported 00:23:45.373 SQ Associations: Not Supported 00:23:45.373 UUID List: Not Supported 00:23:45.373 Multi-Domain Subsystem: Not Supported 00:23:45.373 Fixed Capacity Management: Not Supported 00:23:45.373 Variable Capacity Management: Not Supported 00:23:45.373 Delete Endurance Group: Not Supported 00:23:45.373 Delete NVM Set: Not Supported 00:23:45.373 Extended LBA Formats Supported: Not Supported 00:23:45.373 Flexible Data Placement Supported: Not Supported 00:23:45.373 00:23:45.373 Controller Memory Buffer Support 00:23:45.373 ================================ 00:23:45.373 Supported: No 00:23:45.373 00:23:45.373 Persistent Memory Region Support 00:23:45.373 ================================ 00:23:45.373 Supported: No 00:23:45.373 00:23:45.373 Admin Command Set Attributes 00:23:45.373 ============================ 00:23:45.373 Security Send/Receive: Not Supported 00:23:45.373 Format NVM: Not Supported 00:23:45.373 Firmware Activate/Download: Not Supported 00:23:45.373 Namespace Management: Not Supported 00:23:45.373 Device Self-Test: Not Supported 00:23:45.373 Directives: Not Supported 00:23:45.373 NVMe-MI: Not Supported 00:23:45.373 Virtualization Management: Not Supported 00:23:45.373 Doorbell Buffer Config: Not Supported 00:23:45.373 Get LBA Status Capability: Not Supported 00:23:45.373 Command & Feature Lockdown Capability: Not Supported 00:23:45.373 Abort Command Limit: 1 00:23:45.373 Async Event Request Limit: 1 00:23:45.373 Number of Firmware Slots: N/A 00:23:45.373 Firmware Slot 1 Read-Only: N/A 00:23:45.373 Firmware Activation Without Reset: N/A 00:23:45.373 Multiple Update Detection Support: N/A 00:23:45.373 Firmware Update Granularity: No Information Provided 00:23:45.373 Per-Namespace SMART Log: No 00:23:45.373 Asymmetric Namespace Access Log Page: Not Supported 00:23:45.373 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:45.373 Command Effects Log Page: Not Supported 00:23:45.374 Get Log Page Extended Data: Supported 00:23:45.374 Telemetry Log Pages: Not Supported 00:23:45.374 Persistent Event Log Pages: Not Supported 00:23:45.374 Supported Log Pages Log Page: May Support 00:23:45.374 Commands Supported & Effects Log Page: Not Supported 00:23:45.374 Feature Identifiers & Effects Log Page:May Support 00:23:45.374 NVMe-MI Commands & Effects Log Page: May Support 00:23:45.374 Data Area 4 for Telemetry Log: Not Supported 00:23:45.374 Error Log Page Entries Supported: 1 00:23:45.374 Keep Alive: Not Supported 00:23:45.374 00:23:45.374 NVM Command Set Attributes 00:23:45.374 ========================== 00:23:45.374 Submission Queue Entry Size 00:23:45.374 Max: 1 00:23:45.374 Min: 1 00:23:45.374 Completion Queue Entry Size 00:23:45.374 Max: 1 00:23:45.374 Min: 1 00:23:45.374 Number of Namespaces: 0 00:23:45.374 Compare Command: Not Supported 00:23:45.374 Write Uncorrectable Command: Not Supported 00:23:45.374 Dataset Management Command: Not Supported 00:23:45.374 Write Zeroes Command: Not Supported 00:23:45.374 Set Features Save Field: Not Supported 00:23:45.374 Reservations: Not Supported 00:23:45.374 Timestamp: Not Supported 00:23:45.374 Copy: Not 
Supported 00:23:45.374 Volatile Write Cache: Not Present 00:23:45.374 Atomic Write Unit (Normal): 1 00:23:45.374 Atomic Write Unit (PFail): 1 00:23:45.374 Atomic Compare & Write Unit: 1 00:23:45.374 Fused Compare & Write: Not Supported 00:23:45.374 Scatter-Gather List 00:23:45.374 SGL Command Set: Supported 00:23:45.374 SGL Keyed: Not Supported 00:23:45.374 SGL Bit Bucket Descriptor: Not Supported 00:23:45.374 SGL Metadata Pointer: Not Supported 00:23:45.374 Oversized SGL: Not Supported 00:23:45.374 SGL Metadata Address: Not Supported 00:23:45.374 SGL Offset: Supported 00:23:45.374 Transport SGL Data Block: Not Supported 00:23:45.374 Replay Protected Memory Block: Not Supported 00:23:45.374 00:23:45.374 Firmware Slot Information 00:23:45.374 ========================= 00:23:45.374 Active slot: 0 00:23:45.374 00:23:45.374 00:23:45.374 Error Log 00:23:45.374 ========= 00:23:45.374 00:23:45.374 Active Namespaces 00:23:45.374 ================= 00:23:45.374 Discovery Log Page 00:23:45.374 ================== 00:23:45.374 Generation Counter: 2 00:23:45.374 Number of Records: 2 00:23:45.374 Record Format: 0 00:23:45.374 00:23:45.374 Discovery Log Entry 0 00:23:45.374 ---------------------- 00:23:45.374 Transport Type: 3 (TCP) 00:23:45.374 Address Family: 1 (IPv4) 00:23:45.374 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:45.374 Entry Flags: 00:23:45.374 Duplicate Returned Information: 0 00:23:45.374 Explicit Persistent Connection Support for Discovery: 0 00:23:45.374 Transport Requirements: 00:23:45.374 Secure Channel: Not Specified 00:23:45.374 Port ID: 1 (0x0001) 00:23:45.374 Controller ID: 65535 (0xffff) 00:23:45.374 Admin Max SQ Size: 32 00:23:45.374 Transport Service Identifier: 4420 00:23:45.374 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:45.374 Transport Address: 10.0.0.1 00:23:45.374 Discovery Log Entry 1 00:23:45.374 ---------------------- 00:23:45.374 Transport Type: 3 (TCP) 00:23:45.374 Address Family: 1 (IPv4) 00:23:45.374 Subsystem Type: 2 (NVM Subsystem) 00:23:45.374 Entry Flags: 00:23:45.374 Duplicate Returned Information: 0 00:23:45.374 Explicit Persistent Connection Support for Discovery: 0 00:23:45.374 Transport Requirements: 00:23:45.374 Secure Channel: Not Specified 00:23:45.374 Port ID: 1 (0x0001) 00:23:45.374 Controller ID: 65535 (0xffff) 00:23:45.374 Admin Max SQ Size: 32 00:23:45.374 Transport Service Identifier: 4420 00:23:45.374 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:45.374 Transport Address: 10.0.0.1 00:23:45.374 21:39:08 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:45.374 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.374 get_feature(0x01) failed 00:23:45.374 get_feature(0x02) failed 00:23:45.374 get_feature(0x04) failed 00:23:45.374 ===================================================== 00:23:45.374 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:45.374 ===================================================== 00:23:45.374 Controller Capabilities/Features 00:23:45.374 ================================ 00:23:45.374 Vendor ID: 0000 00:23:45.374 Subsystem Vendor ID: 0000 00:23:45.374 Serial Number: 7e11f5d7e9265b379fec 00:23:45.374 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:45.374 Firmware Version: 6.7.0-68 00:23:45.374 Recommended Arb Burst: 6 00:23:45.374 IEEE OUI Identifier: 00 00 00 
00:23:45.374 Multi-path I/O 00:23:45.374 May have multiple subsystem ports: Yes 00:23:45.374 May have multiple controllers: Yes 00:23:45.374 Associated with SR-IOV VF: No 00:23:45.374 Max Data Transfer Size: Unlimited 00:23:45.374 Max Number of Namespaces: 1024 00:23:45.374 Max Number of I/O Queues: 128 00:23:45.374 NVMe Specification Version (VS): 1.3 00:23:45.374 NVMe Specification Version (Identify): 1.3 00:23:45.374 Maximum Queue Entries: 1024 00:23:45.374 Contiguous Queues Required: No 00:23:45.374 Arbitration Mechanisms Supported 00:23:45.374 Weighted Round Robin: Not Supported 00:23:45.374 Vendor Specific: Not Supported 00:23:45.374 Reset Timeout: 7500 ms 00:23:45.374 Doorbell Stride: 4 bytes 00:23:45.374 NVM Subsystem Reset: Not Supported 00:23:45.374 Command Sets Supported 00:23:45.374 NVM Command Set: Supported 00:23:45.374 Boot Partition: Not Supported 00:23:45.374 Memory Page Size Minimum: 4096 bytes 00:23:45.374 Memory Page Size Maximum: 4096 bytes 00:23:45.374 Persistent Memory Region: Not Supported 00:23:45.374 Optional Asynchronous Events Supported 00:23:45.374 Namespace Attribute Notices: Supported 00:23:45.374 Firmware Activation Notices: Not Supported 00:23:45.374 ANA Change Notices: Supported 00:23:45.374 PLE Aggregate Log Change Notices: Not Supported 00:23:45.374 LBA Status Info Alert Notices: Not Supported 00:23:45.374 EGE Aggregate Log Change Notices: Not Supported 00:23:45.374 Normal NVM Subsystem Shutdown event: Not Supported 00:23:45.374 Zone Descriptor Change Notices: Not Supported 00:23:45.374 Discovery Log Change Notices: Not Supported 00:23:45.374 Controller Attributes 00:23:45.374 128-bit Host Identifier: Supported 00:23:45.374 Non-Operational Permissive Mode: Not Supported 00:23:45.374 NVM Sets: Not Supported 00:23:45.374 Read Recovery Levels: Not Supported 00:23:45.374 Endurance Groups: Not Supported 00:23:45.374 Predictable Latency Mode: Not Supported 00:23:45.374 Traffic Based Keep ALive: Supported 00:23:45.374 Namespace Granularity: Not Supported 00:23:45.374 SQ Associations: Not Supported 00:23:45.374 UUID List: Not Supported 00:23:45.374 Multi-Domain Subsystem: Not Supported 00:23:45.374 Fixed Capacity Management: Not Supported 00:23:45.374 Variable Capacity Management: Not Supported 00:23:45.374 Delete Endurance Group: Not Supported 00:23:45.374 Delete NVM Set: Not Supported 00:23:45.374 Extended LBA Formats Supported: Not Supported 00:23:45.374 Flexible Data Placement Supported: Not Supported 00:23:45.374 00:23:45.374 Controller Memory Buffer Support 00:23:45.374 ================================ 00:23:45.374 Supported: No 00:23:45.374 00:23:45.374 Persistent Memory Region Support 00:23:45.374 ================================ 00:23:45.374 Supported: No 00:23:45.374 00:23:45.374 Admin Command Set Attributes 00:23:45.374 ============================ 00:23:45.374 Security Send/Receive: Not Supported 00:23:45.374 Format NVM: Not Supported 00:23:45.374 Firmware Activate/Download: Not Supported 00:23:45.374 Namespace Management: Not Supported 00:23:45.374 Device Self-Test: Not Supported 00:23:45.374 Directives: Not Supported 00:23:45.374 NVMe-MI: Not Supported 00:23:45.374 Virtualization Management: Not Supported 00:23:45.374 Doorbell Buffer Config: Not Supported 00:23:45.374 Get LBA Status Capability: Not Supported 00:23:45.374 Command & Feature Lockdown Capability: Not Supported 00:23:45.374 Abort Command Limit: 4 00:23:45.374 Async Event Request Limit: 4 00:23:45.374 Number of Firmware Slots: N/A 00:23:45.374 Firmware Slot 1 Read-Only: N/A 00:23:45.374 
Firmware Activation Without Reset: N/A 00:23:45.374 Multiple Update Detection Support: N/A 00:23:45.374 Firmware Update Granularity: No Information Provided 00:23:45.374 Per-Namespace SMART Log: Yes 00:23:45.374 Asymmetric Namespace Access Log Page: Supported 00:23:45.374 ANA Transition Time : 10 sec 00:23:45.374 00:23:45.374 Asymmetric Namespace Access Capabilities 00:23:45.374 ANA Optimized State : Supported 00:23:45.374 ANA Non-Optimized State : Supported 00:23:45.374 ANA Inaccessible State : Supported 00:23:45.374 ANA Persistent Loss State : Supported 00:23:45.374 ANA Change State : Supported 00:23:45.374 ANAGRPID is not changed : No 00:23:45.374 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:45.374 00:23:45.374 ANA Group Identifier Maximum : 128 00:23:45.374 Number of ANA Group Identifiers : 128 00:23:45.374 Max Number of Allowed Namespaces : 1024 00:23:45.374 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:45.374 Command Effects Log Page: Supported 00:23:45.374 Get Log Page Extended Data: Supported 00:23:45.374 Telemetry Log Pages: Not Supported 00:23:45.374 Persistent Event Log Pages: Not Supported 00:23:45.375 Supported Log Pages Log Page: May Support 00:23:45.375 Commands Supported & Effects Log Page: Not Supported 00:23:45.375 Feature Identifiers & Effects Log Page:May Support 00:23:45.375 NVMe-MI Commands & Effects Log Page: May Support 00:23:45.375 Data Area 4 for Telemetry Log: Not Supported 00:23:45.375 Error Log Page Entries Supported: 128 00:23:45.375 Keep Alive: Supported 00:23:45.375 Keep Alive Granularity: 1000 ms 00:23:45.375 00:23:45.375 NVM Command Set Attributes 00:23:45.375 ========================== 00:23:45.375 Submission Queue Entry Size 00:23:45.375 Max: 64 00:23:45.375 Min: 64 00:23:45.375 Completion Queue Entry Size 00:23:45.375 Max: 16 00:23:45.375 Min: 16 00:23:45.375 Number of Namespaces: 1024 00:23:45.375 Compare Command: Not Supported 00:23:45.375 Write Uncorrectable Command: Not Supported 00:23:45.375 Dataset Management Command: Supported 00:23:45.375 Write Zeroes Command: Supported 00:23:45.375 Set Features Save Field: Not Supported 00:23:45.375 Reservations: Not Supported 00:23:45.375 Timestamp: Not Supported 00:23:45.375 Copy: Not Supported 00:23:45.375 Volatile Write Cache: Present 00:23:45.375 Atomic Write Unit (Normal): 1 00:23:45.375 Atomic Write Unit (PFail): 1 00:23:45.375 Atomic Compare & Write Unit: 1 00:23:45.375 Fused Compare & Write: Not Supported 00:23:45.375 Scatter-Gather List 00:23:45.375 SGL Command Set: Supported 00:23:45.375 SGL Keyed: Not Supported 00:23:45.375 SGL Bit Bucket Descriptor: Not Supported 00:23:45.375 SGL Metadata Pointer: Not Supported 00:23:45.375 Oversized SGL: Not Supported 00:23:45.375 SGL Metadata Address: Not Supported 00:23:45.375 SGL Offset: Supported 00:23:45.375 Transport SGL Data Block: Not Supported 00:23:45.375 Replay Protected Memory Block: Not Supported 00:23:45.375 00:23:45.375 Firmware Slot Information 00:23:45.375 ========================= 00:23:45.375 Active slot: 0 00:23:45.375 00:23:45.375 Asymmetric Namespace Access 00:23:45.375 =========================== 00:23:45.375 Change Count : 0 00:23:45.375 Number of ANA Group Descriptors : 1 00:23:45.375 ANA Group Descriptor : 0 00:23:45.375 ANA Group ID : 1 00:23:45.375 Number of NSID Values : 1 00:23:45.375 Change Count : 0 00:23:45.375 ANA State : 1 00:23:45.375 Namespace Identifier : 1 00:23:45.375 00:23:45.375 Commands Supported and Effects 00:23:45.375 ============================== 00:23:45.375 Admin Commands 00:23:45.375 -------------- 
00:23:45.375 Get Log Page (02h): Supported 00:23:45.375 Identify (06h): Supported 00:23:45.375 Abort (08h): Supported 00:23:45.375 Set Features (09h): Supported 00:23:45.375 Get Features (0Ah): Supported 00:23:45.375 Asynchronous Event Request (0Ch): Supported 00:23:45.375 Keep Alive (18h): Supported 00:23:45.375 I/O Commands 00:23:45.375 ------------ 00:23:45.375 Flush (00h): Supported 00:23:45.375 Write (01h): Supported LBA-Change 00:23:45.375 Read (02h): Supported 00:23:45.375 Write Zeroes (08h): Supported LBA-Change 00:23:45.375 Dataset Management (09h): Supported 00:23:45.375 00:23:45.375 Error Log 00:23:45.375 ========= 00:23:45.375 Entry: 0 00:23:45.375 Error Count: 0x3 00:23:45.375 Submission Queue Id: 0x0 00:23:45.375 Command Id: 0x5 00:23:45.375 Phase Bit: 0 00:23:45.375 Status Code: 0x2 00:23:45.375 Status Code Type: 0x0 00:23:45.375 Do Not Retry: 1 00:23:45.375 Error Location: 0x28 00:23:45.375 LBA: 0x0 00:23:45.375 Namespace: 0x0 00:23:45.375 Vendor Log Page: 0x0 00:23:45.375 ----------- 00:23:45.375 Entry: 1 00:23:45.375 Error Count: 0x2 00:23:45.375 Submission Queue Id: 0x0 00:23:45.375 Command Id: 0x5 00:23:45.375 Phase Bit: 0 00:23:45.375 Status Code: 0x2 00:23:45.375 Status Code Type: 0x0 00:23:45.375 Do Not Retry: 1 00:23:45.375 Error Location: 0x28 00:23:45.375 LBA: 0x0 00:23:45.375 Namespace: 0x0 00:23:45.375 Vendor Log Page: 0x0 00:23:45.375 ----------- 00:23:45.375 Entry: 2 00:23:45.375 Error Count: 0x1 00:23:45.375 Submission Queue Id: 0x0 00:23:45.375 Command Id: 0x4 00:23:45.375 Phase Bit: 0 00:23:45.375 Status Code: 0x2 00:23:45.375 Status Code Type: 0x0 00:23:45.375 Do Not Retry: 1 00:23:45.375 Error Location: 0x28 00:23:45.375 LBA: 0x0 00:23:45.375 Namespace: 0x0 00:23:45.375 Vendor Log Page: 0x0 00:23:45.375 00:23:45.375 Number of Queues 00:23:45.375 ================ 00:23:45.375 Number of I/O Submission Queues: 128 00:23:45.375 Number of I/O Completion Queues: 128 00:23:45.375 00:23:45.375 ZNS Specific Controller Data 00:23:45.375 ============================ 00:23:45.375 Zone Append Size Limit: 0 00:23:45.375 00:23:45.375 00:23:45.375 Active Namespaces 00:23:45.375 ================= 00:23:45.375 get_feature(0x05) failed 00:23:45.375 Namespace ID:1 00:23:45.375 Command Set Identifier: NVM (00h) 00:23:45.375 Deallocate: Supported 00:23:45.375 Deallocated/Unwritten Error: Not Supported 00:23:45.375 Deallocated Read Value: Unknown 00:23:45.375 Deallocate in Write Zeroes: Not Supported 00:23:45.375 Deallocated Guard Field: 0xFFFF 00:23:45.375 Flush: Supported 00:23:45.375 Reservation: Not Supported 00:23:45.375 Namespace Sharing Capabilities: Multiple Controllers 00:23:45.375 Size (in LBAs): 3125627568 (1490GiB) 00:23:45.375 Capacity (in LBAs): 3125627568 (1490GiB) 00:23:45.375 Utilization (in LBAs): 3125627568 (1490GiB) 00:23:45.375 UUID: f7ccceb4-e0dc-4187-931f-1dc3aabf1588 00:23:45.375 Thin Provisioning: Not Supported 00:23:45.375 Per-NS Atomic Units: Yes 00:23:45.375 Atomic Boundary Size (Normal): 0 00:23:45.375 Atomic Boundary Size (PFail): 0 00:23:45.375 Atomic Boundary Offset: 0 00:23:45.375 NGUID/EUI64 Never Reused: No 00:23:45.375 ANA group ID: 1 00:23:45.375 Namespace Write Protected: No 00:23:45.375 Number of LBA Formats: 1 00:23:45.375 Current LBA Format: LBA Format #00 00:23:45.375 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:45.375 00:23:45.375 21:39:08 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:45.375 21:39:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:45.375 21:39:08 -- nvmf/common.sh@117 -- # sync 00:23:45.375 21:39:08 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.375 21:39:08 -- nvmf/common.sh@120 -- # set +e 00:23:45.375 21:39:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.375 21:39:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.375 rmmod nvme_tcp 00:23:45.375 rmmod nvme_fabrics 00:23:45.375 21:39:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.375 21:39:08 -- nvmf/common.sh@124 -- # set -e 00:23:45.375 21:39:08 -- nvmf/common.sh@125 -- # return 0 00:23:45.375 21:39:08 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:23:45.375 21:39:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:45.375 21:39:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:45.375 21:39:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:45.375 21:39:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.375 21:39:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.375 21:39:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.375 21:39:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.375 21:39:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.909 21:39:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:47.909 21:39:10 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:47.909 21:39:10 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:47.909 21:39:10 -- nvmf/common.sh@675 -- # echo 0 00:23:47.909 21:39:10 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:47.909 21:39:10 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:47.909 21:39:10 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:47.909 21:39:10 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:47.909 21:39:10 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:47.909 21:39:10 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:47.909 21:39:10 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:51.195 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:51.195 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:52.574 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:23:52.574 00:23:52.574 real 0m18.803s 00:23:52.574 user 0m4.341s 00:23:52.574 sys 0m9.982s 00:23:52.574 21:39:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:52.574 21:39:15 -- common/autotest_common.sh@10 -- # set +x 00:23:52.574 ************************************ 00:23:52.574 
END TEST nvmf_identify_kernel_target 00:23:52.574 ************************************ 00:23:52.574 21:39:15 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:52.574 21:39:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:52.574 21:39:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:52.574 21:39:15 -- common/autotest_common.sh@10 -- # set +x 00:23:52.574 ************************************ 00:23:52.574 START TEST nvmf_auth 00:23:52.574 ************************************ 00:23:52.574 21:39:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:52.834 * Looking for test storage... 00:23:52.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:52.834 21:39:15 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.834 21:39:15 -- nvmf/common.sh@7 -- # uname -s 00:23:52.834 21:39:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.834 21:39:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.834 21:39:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.834 21:39:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.834 21:39:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.834 21:39:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.834 21:39:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.834 21:39:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.834 21:39:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.834 21:39:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.834 21:39:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:52.834 21:39:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:52.834 21:39:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.834 21:39:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.834 21:39:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.834 21:39:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.834 21:39:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.834 21:39:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.834 21:39:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.834 21:39:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.834 21:39:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.834 21:39:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.834 21:39:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.834 21:39:15 -- paths/export.sh@5 -- # export PATH 00:23:52.834 21:39:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.834 21:39:15 -- nvmf/common.sh@47 -- # : 0 00:23:52.834 21:39:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.834 21:39:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.834 21:39:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.834 21:39:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.834 21:39:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.834 21:39:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.834 21:39:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.834 21:39:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.834 21:39:15 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:52.834 21:39:15 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:52.834 21:39:15 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:52.834 21:39:15 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:52.834 21:39:15 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:52.834 21:39:15 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:52.834 21:39:15 -- host/auth.sh@21 -- # keys=() 00:23:52.834 21:39:15 -- host/auth.sh@77 -- # nvmftestinit 00:23:52.834 21:39:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:52.834 21:39:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.834 21:39:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:52.834 21:39:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:52.834 21:39:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:52.834 21:39:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.834 21:39:15 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.834 21:39:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.834 21:39:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:52.834 21:39:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:52.834 21:39:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:52.834 21:39:15 -- common/autotest_common.sh@10 -- # set +x 00:23:59.442 21:39:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:59.442 21:39:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:59.442 21:39:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:59.442 21:39:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:59.442 21:39:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:59.442 21:39:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:59.442 21:39:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:59.442 21:39:22 -- nvmf/common.sh@295 -- # net_devs=() 00:23:59.442 21:39:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:59.442 21:39:22 -- nvmf/common.sh@296 -- # e810=() 00:23:59.442 21:39:22 -- nvmf/common.sh@296 -- # local -ga e810 00:23:59.442 21:39:22 -- nvmf/common.sh@297 -- # x722=() 00:23:59.442 21:39:22 -- nvmf/common.sh@297 -- # local -ga x722 00:23:59.442 21:39:22 -- nvmf/common.sh@298 -- # mlx=() 00:23:59.442 21:39:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:59.442 21:39:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.442 21:39:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.442 21:39:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.442 21:39:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.442 21:39:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.442 21:39:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.442 21:39:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.442 21:39:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.442 21:39:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.442 21:39:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.442 21:39:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.442 21:39:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:59.442 21:39:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:59.442 21:39:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:59.442 21:39:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.442 21:39:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:59.442 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:59.442 21:39:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.442 21:39:22 -- nvmf/common.sh@341 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:23:59.442 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:59.442 21:39:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:59.442 21:39:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.442 21:39:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.442 21:39:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:59.442 21:39:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.442 21:39:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:59.442 Found net devices under 0000:af:00.0: cvl_0_0 00:23:59.442 21:39:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.442 21:39:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.442 21:39:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.442 21:39:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:59.442 21:39:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.442 21:39:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:59.442 Found net devices under 0000:af:00.1: cvl_0_1 00:23:59.442 21:39:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.442 21:39:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:59.442 21:39:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:59.442 21:39:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:59.442 21:39:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:59.442 21:39:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.442 21:39:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.442 21:39:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.442 21:39:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:59.442 21:39:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.442 21:39:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.442 21:39:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:59.442 21:39:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.442 21:39:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.442 21:39:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:59.442 21:39:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:59.442 21:39:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.442 21:39:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.442 21:39:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.442 21:39:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.442 21:39:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:59.442 21:39:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.701 21:39:22 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.701 21:39:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.701 21:39:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:59.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:23:59.701 00:23:59.701 --- 10.0.0.2 ping statistics --- 00:23:59.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.701 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:23:59.701 21:39:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:59.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:23:59.701 00:23:59.701 --- 10.0.0.1 ping statistics --- 00:23:59.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.701 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:23:59.701 21:39:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.701 21:39:22 -- nvmf/common.sh@411 -- # return 0 00:23:59.701 21:39:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:59.701 21:39:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.701 21:39:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:59.701 21:39:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:59.701 21:39:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.702 21:39:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:59.702 21:39:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:59.702 21:39:22 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:23:59.702 21:39:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:59.702 21:39:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:59.702 21:39:22 -- common/autotest_common.sh@10 -- # set +x 00:23:59.702 21:39:22 -- nvmf/common.sh@470 -- # nvmfpid=2973107 00:23:59.702 21:39:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:59.702 21:39:22 -- nvmf/common.sh@471 -- # waitforlisten 2973107 00:23:59.702 21:39:22 -- common/autotest_common.sh@817 -- # '[' -z 2973107 ']' 00:23:59.702 21:39:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.702 21:39:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:59.702 21:39:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
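
For reference, the nvmf_tcp_init sequence traced above moves the target-side E810 port (cvl_0_0) into its own network namespace and leaves the initiator port (cvl_0_1) in the default one, so 10.0.0.1 and 10.0.0.2 talk over a real link rather than loopback. A minimal standalone sketch of the same plumbing, using the interface names and addresses from this trace (run as root; error handling omitted):

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                             # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1         # target -> initiator

The target application is then launched inside the namespace with ip netns exec "$NS", which is exactly what the nvmf/common.sh@469 line above does when starting nvmf_tgt.
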
00:23:59.702 21:39:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:59.702 21:39:22 -- common/autotest_common.sh@10 -- # set +x 00:24:00.637 21:39:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:00.637 21:39:23 -- common/autotest_common.sh@850 -- # return 0 00:24:00.637 21:39:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:00.637 21:39:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:00.637 21:39:23 -- common/autotest_common.sh@10 -- # set +x 00:24:00.637 21:39:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.637 21:39:23 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:00.637 21:39:23 -- host/auth.sh@81 -- # gen_key null 32 00:24:00.637 21:39:23 -- host/auth.sh@53 -- # local digest len file key 00:24:00.637 21:39:23 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.637 21:39:23 -- host/auth.sh@54 -- # local -A digests 00:24:00.637 21:39:23 -- host/auth.sh@56 -- # digest=null 00:24:00.637 21:39:23 -- host/auth.sh@56 -- # len=32 00:24:00.637 21:39:23 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:00.637 21:39:23 -- host/auth.sh@57 -- # key=56a19c8222a65099e67790474177fa0a 00:24:00.637 21:39:23 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:00.637 21:39:23 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.dPN 00:24:00.637 21:39:23 -- host/auth.sh@59 -- # format_dhchap_key 56a19c8222a65099e67790474177fa0a 0 00:24:00.637 21:39:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 56a19c8222a65099e67790474177fa0a 0 00:24:00.637 21:39:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:00.637 21:39:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:00.637 21:39:23 -- nvmf/common.sh@693 -- # key=56a19c8222a65099e67790474177fa0a 00:24:00.637 21:39:23 -- nvmf/common.sh@693 -- # digest=0 00:24:00.637 21:39:23 -- nvmf/common.sh@694 -- # python - 00:24:00.637 21:39:23 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.dPN 00:24:00.637 21:39:23 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.dPN 00:24:00.637 21:39:23 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.dPN 00:24:00.637 21:39:23 -- host/auth.sh@82 -- # gen_key null 48 00:24:00.638 21:39:23 -- host/auth.sh@53 -- # local digest len file key 00:24:00.638 21:39:23 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.638 21:39:23 -- host/auth.sh@54 -- # local -A digests 00:24:00.638 21:39:23 -- host/auth.sh@56 -- # digest=null 00:24:00.638 21:39:23 -- host/auth.sh@56 -- # len=48 00:24:00.638 21:39:23 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:00.638 21:39:23 -- host/auth.sh@57 -- # key=68b785e43b370e45966ac01540b6c4ae81c1eb1eb4dd6f57 00:24:00.638 21:39:23 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:00.638 21:39:23 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.E2H 00:24:00.638 21:39:23 -- host/auth.sh@59 -- # format_dhchap_key 68b785e43b370e45966ac01540b6c4ae81c1eb1eb4dd6f57 0 00:24:00.638 21:39:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 68b785e43b370e45966ac01540b6c4ae81c1eb1eb4dd6f57 0 00:24:00.638 21:39:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:00.638 21:39:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:00.638 21:39:23 -- nvmf/common.sh@693 -- # key=68b785e43b370e45966ac01540b6c4ae81c1eb1eb4dd6f57 00:24:00.638 21:39:23 -- nvmf/common.sh@693 -- # 
digest=0 00:24:00.638 21:39:23 -- nvmf/common.sh@694 -- # python - 00:24:00.638 21:39:23 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.E2H 00:24:00.638 21:39:23 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.E2H 00:24:00.638 21:39:23 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.E2H 00:24:00.638 21:39:23 -- host/auth.sh@83 -- # gen_key sha256 32 00:24:00.638 21:39:23 -- host/auth.sh@53 -- # local digest len file key 00:24:00.638 21:39:23 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.638 21:39:23 -- host/auth.sh@54 -- # local -A digests 00:24:00.638 21:39:23 -- host/auth.sh@56 -- # digest=sha256 00:24:00.638 21:39:23 -- host/auth.sh@56 -- # len=32 00:24:00.638 21:39:23 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:00.638 21:39:23 -- host/auth.sh@57 -- # key=c1b55059af831ad61f142c29ec3ff33e 00:24:00.638 21:39:23 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:24:00.638 21:39:23 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.Gex 00:24:00.638 21:39:23 -- host/auth.sh@59 -- # format_dhchap_key c1b55059af831ad61f142c29ec3ff33e 1 00:24:00.638 21:39:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 c1b55059af831ad61f142c29ec3ff33e 1 00:24:00.638 21:39:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:00.638 21:39:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:00.638 21:39:23 -- nvmf/common.sh@693 -- # key=c1b55059af831ad61f142c29ec3ff33e 00:24:00.638 21:39:23 -- nvmf/common.sh@693 -- # digest=1 00:24:00.638 21:39:23 -- nvmf/common.sh@694 -- # python - 00:24:00.638 21:39:23 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.Gex 00:24:00.638 21:39:23 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.Gex 00:24:00.638 21:39:23 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.Gex 00:24:00.897 21:39:23 -- host/auth.sh@84 -- # gen_key sha384 48 00:24:00.897 21:39:23 -- host/auth.sh@53 -- # local digest len file key 00:24:00.897 21:39:23 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.897 21:39:23 -- host/auth.sh@54 -- # local -A digests 00:24:00.897 21:39:23 -- host/auth.sh@56 -- # digest=sha384 00:24:00.897 21:39:23 -- host/auth.sh@56 -- # len=48 00:24:00.897 21:39:23 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:00.897 21:39:23 -- host/auth.sh@57 -- # key=3aafaba9521618498ac147a5e40c7f79fe916c1fd161a126 00:24:00.897 21:39:23 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:24:00.897 21:39:23 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.U9L 00:24:00.897 21:39:23 -- host/auth.sh@59 -- # format_dhchap_key 3aafaba9521618498ac147a5e40c7f79fe916c1fd161a126 2 00:24:00.897 21:39:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 3aafaba9521618498ac147a5e40c7f79fe916c1fd161a126 2 00:24:00.897 21:39:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:00.897 21:39:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:00.897 21:39:23 -- nvmf/common.sh@693 -- # key=3aafaba9521618498ac147a5e40c7f79fe916c1fd161a126 00:24:00.897 21:39:23 -- nvmf/common.sh@693 -- # digest=2 00:24:00.897 21:39:23 -- nvmf/common.sh@694 -- # python - 00:24:00.897 21:39:23 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.U9L 00:24:00.897 21:39:23 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.U9L 00:24:00.897 21:39:23 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.U9L 00:24:00.897 21:39:23 -- host/auth.sh@85 -- # gen_key sha512 64 00:24:00.897 21:39:23 -- host/auth.sh@53 -- # local digest len file key 00:24:00.897 21:39:23 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.897 21:39:23 -- host/auth.sh@54 -- # local -A digests 00:24:00.897 21:39:23 -- host/auth.sh@56 -- # digest=sha512 00:24:00.897 21:39:23 -- host/auth.sh@56 -- # len=64 00:24:00.897 21:39:23 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:00.897 21:39:23 -- host/auth.sh@57 -- # key=b5ef7654d83eff11c1f6a94f0407053e3b651fd869a77c6097cc476d2cbba054 00:24:00.897 21:39:23 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:24:00.897 21:39:23 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.vkC 00:24:00.897 21:39:23 -- host/auth.sh@59 -- # format_dhchap_key b5ef7654d83eff11c1f6a94f0407053e3b651fd869a77c6097cc476d2cbba054 3 00:24:00.897 21:39:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 b5ef7654d83eff11c1f6a94f0407053e3b651fd869a77c6097cc476d2cbba054 3 00:24:00.897 21:39:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:00.897 21:39:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:00.897 21:39:23 -- nvmf/common.sh@693 -- # key=b5ef7654d83eff11c1f6a94f0407053e3b651fd869a77c6097cc476d2cbba054 00:24:00.897 21:39:23 -- nvmf/common.sh@693 -- # digest=3 00:24:00.897 21:39:23 -- nvmf/common.sh@694 -- # python - 00:24:00.897 21:39:23 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.vkC 00:24:00.897 21:39:23 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.vkC 00:24:00.897 21:39:23 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.vkC 00:24:00.897 21:39:23 -- host/auth.sh@87 -- # waitforlisten 2973107 00:24:00.897 21:39:23 -- common/autotest_common.sh@817 -- # '[' -z 2973107 ']' 00:24:00.897 21:39:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.897 21:39:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:00.897 21:39:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
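
The gen_key calls above draw random hex from /dev/urandom with xxd and wrap it as a DH-HMAC-CHAP secret via an inline "python -". Judging by the keys echoed in this log, the DHHC-1 payload is the base64 of the ASCII key with its CRC-32 appended little-endian, and the two-digit field after "DHHC-1:" is the digest id (00=null, 01=sha256, 02=sha384, 03=sha512). A minimal sketch reproducing that format (the python body is an inferred equivalent of the script's inline helper, not a verbatim copy):

gen_dhchap_key() {
    local digest=$1 len=$2                            # digest id 0-3, key length in hex chars
    local key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # same xxd invocation as the trace
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")           # CRC-32 of the key, appended LE
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}

gen_dhchap_key 0 32    # emits something shaped like keys[0] above: DHHC-1:00:...:

Each generated file is chmod 0600 and later registered with the target over RPC via keyring_file_add_key, as the trace continues below.
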
00:24:00.897 21:39:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:00.897 21:39:23 -- common/autotest_common.sh@10 -- # set +x 00:24:01.156 21:39:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:01.156 21:39:23 -- common/autotest_common.sh@850 -- # return 0 00:24:01.156 21:39:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:01.156 21:39:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dPN 00:24:01.156 21:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.156 21:39:23 -- common/autotest_common.sh@10 -- # set +x 00:24:01.156 21:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.156 21:39:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:01.156 21:39:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.E2H 00:24:01.156 21:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.156 21:39:23 -- common/autotest_common.sh@10 -- # set +x 00:24:01.156 21:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.156 21:39:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:01.156 21:39:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Gex 00:24:01.156 21:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.156 21:39:23 -- common/autotest_common.sh@10 -- # set +x 00:24:01.156 21:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.156 21:39:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:01.156 21:39:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.U9L 00:24:01.156 21:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.156 21:39:23 -- common/autotest_common.sh@10 -- # set +x 00:24:01.156 21:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.156 21:39:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:01.156 21:39:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.vkC 00:24:01.156 21:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.156 21:39:23 -- common/autotest_common.sh@10 -- # set +x 00:24:01.156 21:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.156 21:39:23 -- host/auth.sh@92 -- # nvmet_auth_init 00:24:01.156 21:39:23 -- host/auth.sh@35 -- # get_main_ns_ip 00:24:01.156 21:39:23 -- nvmf/common.sh@717 -- # local ip 00:24:01.156 21:39:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:01.156 21:39:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:01.156 21:39:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.156 21:39:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.156 21:39:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:01.156 21:39:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.156 21:39:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:01.156 21:39:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:01.156 21:39:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:01.156 21:39:23 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:01.156 21:39:23 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:01.156 21:39:23 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:01.156 21:39:23 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:01.156 21:39:23 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:01.156 21:39:23 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:01.156 21:39:23 -- nvmf/common.sh@628 -- # local block nvme 00:24:01.156 21:39:23 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:24:01.156 21:39:23 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:01.156 21:39:23 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:01.156 21:39:23 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:04.443 Waiting for block devices as requested 00:24:04.443 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:04.443 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:04.443 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:04.443 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:04.443 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:04.702 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:04.702 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:04.702 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:04.961 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:04.961 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:04.961 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:05.219 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:05.219 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:05.219 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:05.477 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:05.477 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:05.477 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:24:06.412 21:39:29 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:06.412 21:39:29 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:06.412 21:39:29 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:06.412 21:39:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:06.412 21:39:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:06.412 21:39:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:06.412 21:39:29 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:06.412 21:39:29 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:06.412 21:39:29 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:06.412 No valid GPT data, bailing 00:24:06.412 21:39:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:06.412 21:39:29 -- scripts/common.sh@391 -- # pt= 00:24:06.412 21:39:29 -- scripts/common.sh@392 -- # return 1 00:24:06.412 21:39:29 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:06.412 21:39:29 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:24:06.412 21:39:29 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:06.412 21:39:29 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:06.412 21:39:29 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:06.412 21:39:29 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:06.412 21:39:29 -- nvmf/common.sh@656 -- # echo 1 00:24:06.412 21:39:29 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:24:06.412 21:39:29 -- nvmf/common.sh@658 -- # echo 1 00:24:06.412 21:39:29 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:06.412 21:39:29 -- nvmf/common.sh@661 -- # echo tcp 00:24:06.412 21:39:29 -- 
00:24:06.412 21:39:29 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420
00:24:06.412
00:24:06.412 Discovery Log Number of Records 2, Generation counter 2
00:24:06.412 =====Discovery Log Entry 0======
00:24:06.412 trtype: tcp
00:24:06.412 adrfam: ipv4
00:24:06.412 subtype: current discovery subsystem
00:24:06.412 treq: not specified, sq flow control disable supported
00:24:06.412 portid: 1
00:24:06.412 trsvcid: 4420
00:24:06.412 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:24:06.412 traddr: 10.0.0.1
00:24:06.412 eflags: none
00:24:06.412 sectype: none
00:24:06.412 =====Discovery Log Entry 1======
00:24:06.412 trtype: tcp
00:24:06.412 adrfam: ipv4
00:24:06.412 subtype: nvme subsystem
00:24:06.412 treq: not specified, sq flow control disable supported
00:24:06.412 portid: 1
00:24:06.412 trsvcid: 4420
00:24:06.412 subnqn: nqn.2024-02.io.spdk:cnode0
00:24:06.412 traddr: 10.0.0.1
00:24:06.412 eflags: none
00:24:06.413 sectype: none
00:24:06.413 21:39:29 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:24:06.413 21:39:29 -- host/auth.sh@37 -- # echo 0
00:24:06.413 21:39:29 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:24:06.413 21:39:29 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:24:06.413 21:39:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:06.413 21:39:29 -- host/auth.sh@44 -- # digest=sha256
00:24:06.413 21:39:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:06.413 21:39:29 -- host/auth.sh@44 -- # keyid=1
00:24:06.413 21:39:29 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==:
00:24:06.413 21:39:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:06.413 21:39:29 -- host/auth.sh@48 -- # echo ffdhe2048
00:24:06.413 21:39:29 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==:
00:24:06.413 21:39:29 -- host/auth.sh@100 -- # IFS=,
00:24:06.413 21:39:29 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512
00:24:06.413 21:39:29 -- host/auth.sh@100 -- # IFS=,
00:24:06.413 21:39:29 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:06.413 21:39:29 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:24:06.413 21:39:29 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:06.413 21:39:29 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512
00:24:06.413 21:39:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:06.413 21:39:29 -- host/auth.sh@68 -- # keyid=1
00:24:06.413 21:39:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:24:06.413 21:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.413 21:39:29 -- common/autotest_common.sh@10 -- # set +x
00:24:06.413 21:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:06.413 21:39:29 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:06.413 21:39:29 -- nvmf/common.sh@717 -- # local ip
00:24:06.413 21:39:29 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:06.413 21:39:29 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:06.413 21:39:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:06.413 21:39:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:06.413 21:39:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:24:06.413 21:39:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:06.413 21:39:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:24:06.413 21:39:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:24:06.413 21:39:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:24:06.413 21:39:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:24:06.413 21:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.413 21:39:29 -- common/autotest_common.sh@10 -- # set +x
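
The first authenticated attach advertises every digest and DH group at once. On the target side, nvmet_auth_set_key installs the challenge parameters for the host (most likely via the dhchap_hash, dhchap_dhgroup and dhchap_key attributes under /sys/kernel/config/nvmet/hosts/, which the xtrace redirections hide); on the initiator side the two RPCs mirror the trace directly. A sketch of the initiator half, assuming the stock scripts/rpc.py:

  # Negotiation policy must be set before the controller is attached.
  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  # --dhchap-key names the keyring entry registered earlier, not a file path.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1
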
00:24:06.671 nvme0n1
00:24:06.671 21:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:06.671 21:39:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:06.671 21:39:29 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:06.671 21:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.671 21:39:29 -- common/autotest_common.sh@10 -- # set +x
00:24:06.671 21:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:06.671 21:39:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:06.671 21:39:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:06.671 21:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.671 21:39:29 -- common/autotest_common.sh@10 -- # set +x
00:24:06.671 21:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
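
Each iteration verifies the handshake the same way: if authentication succeeded, the new controller is visible to the RPC layer under the requested name, after which it is detached so the next digest/dhgroup/key combination starts from a clean state. The check reduces to three commands (a sketch, assuming scripts/rpc.py and jq are available):

  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]  # fails the test if the authenticated attach did not stick
  scripts/rpc.py bdev_nvme_detach_controller nvme0
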
00:24:06.671 21:39:29 -- host/auth.sh@107 -- # for digest in "${digests[@]}"
00:24:06.671 21:39:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:24:06.671 21:39:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:06.671 21:39:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:24:06.671 21:39:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:06.671 21:39:29 -- host/auth.sh@44 -- # digest=sha256
00:24:06.671 21:39:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:06.671 21:39:29 -- host/auth.sh@44 -- # keyid=0
00:24:06.671 21:39:29 -- host/auth.sh@45 -- # key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ:
00:24:06.671 21:39:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:06.671 21:39:29 -- host/auth.sh@48 -- # echo ffdhe2048
00:24:06.671 21:39:29 -- host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ:
00:24:06.671 21:39:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0
00:24:06.671 21:39:29 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:06.671 21:39:29 -- host/auth.sh@68 -- # digest=sha256
00:24:06.671 21:39:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:24:06.671 21:39:29 -- host/auth.sh@68 -- # keyid=0
00:24:06.671 21:39:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:06.671 21:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.671 21:39:29 -- common/autotest_common.sh@10 -- # set +x
00:24:06.671 21:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:06.671 21:39:29 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:06.671 21:39:29 -- nvmf/common.sh@717 -- # local ip
00:24:06.671 21:39:29 -- nvmf/common.sh@718 -- # ip_candidates=()
00:24:06.671 21:39:29 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:24:06.671 21:39:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:06.671 21:39:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:06.671 21:39:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:24:06.671 21:39:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:06.671 21:39:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:24:06.671 21:39:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:24:06.671 21:39:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:24:06.672 21:39:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:24:06.672 21:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.672 21:39:29 -- common/autotest_common.sh@10 -- # set +x
00:24:06.930 nvme0n1
00:24:06.930 21:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:06.930 21:39:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:24:06.930 21:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.930 21:39:29 -- host/auth.sh@73 -- # jq -r '.[].name'
00:24:06.930 21:39:29 -- common/autotest_common.sh@10 -- # set +x
00:24:06.930 21:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:06.930 21:39:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:06.930 21:39:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:06.930 21:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.930 21:39:29 -- common/autotest_common.sh@10 -- # set +x
00:24:06.930 21:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:06.930 21:39:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:24:06.930 21:39:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:24:06.930 21:39:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:24:06.930 21:39:29 -- host/auth.sh@44 -- # digest=sha256
00:24:06.930 21:39:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:06.930 21:39:29 -- host/auth.sh@44 -- # keyid=1
00:24:06.930 21:39:29 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==:
00:24:06.930 21:39:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:24:06.930 21:39:29 -- host/auth.sh@48 -- # echo ffdhe2048
00:24:06.930 21:39:29 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==:
00:24:06.930 21:39:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1
00:24:06.930 21:39:29 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:24:06.930 21:39:29 -- host/auth.sh@68 -- # digest=sha256
00:24:06.930 21:39:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:24:06.930 21:39:29 -- host/auth.sh@68 -- # keyid=1
00:24:06.930 21:39:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:06.930 21:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:06.930 21:39:29 -- common/autotest_common.sh@10 -- # set +x
00:24:06.930 21:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:06.930 21:39:29 -- host/auth.sh@70 -- # get_main_ns_ip
00:24:06.930 21:39:29 -- nvmf/common.sh@717 -- # local ip 00:24:06.930 21:39:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:06.930 21:39:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:06.930 21:39:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.930 21:39:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.930 21:39:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:06.930 21:39:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.930 21:39:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:06.930 21:39:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:06.930 21:39:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:06.930 21:39:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:06.930 21:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.930 21:39:29 -- common/autotest_common.sh@10 -- # set +x 00:24:07.188 nvme0n1 00:24:07.188 21:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.188 21:39:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.188 21:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.188 21:39:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.188 21:39:29 -- common/autotest_common.sh@10 -- # set +x 00:24:07.188 21:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.188 21:39:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.188 21:39:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.188 21:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.188 21:39:29 -- common/autotest_common.sh@10 -- # set +x 00:24:07.188 21:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.188 21:39:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.188 21:39:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:07.188 21:39:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.188 21:39:29 -- host/auth.sh@44 -- # digest=sha256 00:24:07.188 21:39:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.188 21:39:29 -- host/auth.sh@44 -- # keyid=2 00:24:07.188 21:39:29 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:07.188 21:39:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:07.188 21:39:29 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:07.189 21:39:29 -- host/auth.sh@49 -- # echo DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:07.189 21:39:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:24:07.189 21:39:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.189 21:39:29 -- host/auth.sh@68 -- # digest=sha256 00:24:07.189 21:39:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:07.189 21:39:29 -- host/auth.sh@68 -- # keyid=2 00:24:07.189 21:39:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.189 21:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.189 21:39:29 -- common/autotest_common.sh@10 -- # set +x 00:24:07.189 21:39:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.189 21:39:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.189 21:39:29 -- nvmf/common.sh@717 -- # local ip 00:24:07.189 21:39:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.189 21:39:29 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:24:07.189 21:39:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.189 21:39:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.189 21:39:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.189 21:39:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.189 21:39:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.189 21:39:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.189 21:39:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.189 21:39:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:07.189 21:39:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.189 21:39:29 -- common/autotest_common.sh@10 -- # set +x 00:24:07.447 nvme0n1 00:24:07.447 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.447 21:39:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.447 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.447 21:39:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.447 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.447 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.447 21:39:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.447 21:39:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.447 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.447 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.447 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.447 21:39:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.447 21:39:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:07.447 21:39:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.447 21:39:30 -- host/auth.sh@44 -- # digest=sha256 00:24:07.447 21:39:30 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.447 21:39:30 -- host/auth.sh@44 -- # keyid=3 00:24:07.447 21:39:30 -- host/auth.sh@45 -- # key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:07.447 21:39:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:07.447 21:39:30 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:07.447 21:39:30 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:07.447 21:39:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:24:07.447 21:39:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.447 21:39:30 -- host/auth.sh@68 -- # digest=sha256 00:24:07.447 21:39:30 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:07.447 21:39:30 -- host/auth.sh@68 -- # keyid=3 00:24:07.447 21:39:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.447 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.447 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.447 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.447 21:39:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.447 21:39:30 -- nvmf/common.sh@717 -- # local ip 00:24:07.447 21:39:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.447 21:39:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.447 21:39:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:24:07.447 21:39:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.448 21:39:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.448 21:39:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.448 21:39:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.448 21:39:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.448 21:39:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.448 21:39:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:07.448 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.448 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.448 nvme0n1 00:24:07.448 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.448 21:39:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.448 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.448 21:39:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.448 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.448 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.707 21:39:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.707 21:39:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.707 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.707 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.707 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.707 21:39:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.707 21:39:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:07.707 21:39:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.707 21:39:30 -- host/auth.sh@44 -- # digest=sha256 00:24:07.707 21:39:30 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:07.707 21:39:30 -- host/auth.sh@44 -- # keyid=4 00:24:07.707 21:39:30 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:07.707 21:39:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:07.707 21:39:30 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:07.707 21:39:30 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:07.707 21:39:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:24:07.707 21:39:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.707 21:39:30 -- host/auth.sh@68 -- # digest=sha256 00:24:07.707 21:39:30 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:07.707 21:39:30 -- host/auth.sh@68 -- # keyid=4 00:24:07.707 21:39:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:07.707 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.707 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.707 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.707 21:39:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.707 21:39:30 -- nvmf/common.sh@717 -- # local ip 00:24:07.707 21:39:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.707 21:39:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.707 21:39:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.707 21:39:30 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.707 21:39:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.707 21:39:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.707 21:39:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.707 21:39:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.707 21:39:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.707 21:39:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:07.707 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.707 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.707 nvme0n1 00:24:07.707 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.707 21:39:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.707 21:39:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.707 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.707 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.707 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.707 21:39:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.707 21:39:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.707 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.707 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.707 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.707 21:39:30 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:07.707 21:39:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.707 21:39:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:07.707 21:39:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.707 21:39:30 -- host/auth.sh@44 -- # digest=sha256 00:24:07.707 21:39:30 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:07.707 21:39:30 -- host/auth.sh@44 -- # keyid=0 00:24:07.707 21:39:30 -- host/auth.sh@45 -- # key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:07.707 21:39:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:07.707 21:39:30 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:07.707 21:39:30 -- host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:07.707 21:39:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:24:07.707 21:39:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.707 21:39:30 -- host/auth.sh@68 -- # digest=sha256 00:24:07.707 21:39:30 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:07.707 21:39:30 -- host/auth.sh@68 -- # keyid=0 00:24:07.707 21:39:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:07.707 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.707 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.707 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.966 21:39:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.966 21:39:30 -- nvmf/common.sh@717 -- # local ip 00:24:07.966 21:39:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.966 21:39:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.966 21:39:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.966 21:39:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.966 21:39:30 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:24:07.966 21:39:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.966 21:39:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.966 21:39:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.966 21:39:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.966 21:39:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:07.966 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.966 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.966 nvme0n1 00:24:07.966 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.966 21:39:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.966 21:39:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.966 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.966 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.966 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.966 21:39:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.966 21:39:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.966 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.966 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.966 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.966 21:39:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.966 21:39:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:07.966 21:39:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.966 21:39:30 -- host/auth.sh@44 -- # digest=sha256 00:24:07.966 21:39:30 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:07.966 21:39:30 -- host/auth.sh@44 -- # keyid=1 00:24:07.966 21:39:30 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:07.966 21:39:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:07.966 21:39:30 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:07.967 21:39:30 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:07.967 21:39:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:24:07.967 21:39:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.967 21:39:30 -- host/auth.sh@68 -- # digest=sha256 00:24:07.967 21:39:30 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:07.967 21:39:30 -- host/auth.sh@68 -- # keyid=1 00:24:07.967 21:39:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:07.967 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.967 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:07.967 21:39:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.967 21:39:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.967 21:39:30 -- nvmf/common.sh@717 -- # local ip 00:24:07.967 21:39:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.967 21:39:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.967 21:39:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.967 21:39:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.967 21:39:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.967 21:39:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.967 21:39:30 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.967 21:39:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.967 21:39:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.967 21:39:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:07.967 21:39:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.967 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:24:08.226 nvme0n1 00:24:08.226 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.226 21:39:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:08.226 21:39:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.226 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.226 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:08.226 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.226 21:39:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.226 21:39:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.226 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.226 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:08.226 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.226 21:39:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:08.226 21:39:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:08.226 21:39:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:08.226 21:39:31 -- host/auth.sh@44 -- # digest=sha256 00:24:08.226 21:39:31 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.226 21:39:31 -- host/auth.sh@44 -- # keyid=2 00:24:08.226 21:39:31 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:08.226 21:39:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:08.226 21:39:31 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:08.226 21:39:31 -- host/auth.sh@49 -- # echo DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:08.226 21:39:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:24:08.226 21:39:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:08.226 21:39:31 -- host/auth.sh@68 -- # digest=sha256 00:24:08.226 21:39:31 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:08.226 21:39:31 -- host/auth.sh@68 -- # keyid=2 00:24:08.226 21:39:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.226 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.226 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:08.226 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.226 21:39:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:08.226 21:39:31 -- nvmf/common.sh@717 -- # local ip 00:24:08.226 21:39:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:08.226 21:39:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:08.226 21:39:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.226 21:39:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.226 21:39:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:08.226 21:39:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.226 21:39:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:08.226 21:39:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:08.226 21:39:31 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:24:08.226 21:39:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:08.226 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.226 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:08.485 nvme0n1 00:24:08.485 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.485 21:39:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.485 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.485 21:39:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:08.485 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:08.485 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.485 21:39:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.485 21:39:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.485 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.485 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:08.485 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.485 21:39:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:08.485 21:39:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:08.485 21:39:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:08.485 21:39:31 -- host/auth.sh@44 -- # digest=sha256 00:24:08.485 21:39:31 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.485 21:39:31 -- host/auth.sh@44 -- # keyid=3 00:24:08.485 21:39:31 -- host/auth.sh@45 -- # key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:08.485 21:39:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:08.485 21:39:31 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:08.485 21:39:31 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:08.485 21:39:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:24:08.485 21:39:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:08.485 21:39:31 -- host/auth.sh@68 -- # digest=sha256 00:24:08.485 21:39:31 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:08.485 21:39:31 -- host/auth.sh@68 -- # keyid=3 00:24:08.485 21:39:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.485 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.485 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:08.485 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.485 21:39:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:08.485 21:39:31 -- nvmf/common.sh@717 -- # local ip 00:24:08.485 21:39:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:08.485 21:39:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:08.485 21:39:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.485 21:39:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.485 21:39:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:08.485 21:39:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.485 21:39:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:08.485 21:39:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:08.485 21:39:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:08.485 21:39:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:08.485 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.485 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:08.744 nvme0n1 00:24:08.744 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.744 21:39:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:08.744 21:39:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.744 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.744 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:08.744 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.744 21:39:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.744 21:39:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.744 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.744 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:08.744 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.744 21:39:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:08.744 21:39:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:08.744 21:39:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:08.744 21:39:31 -- host/auth.sh@44 -- # digest=sha256 00:24:08.744 21:39:31 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:08.744 21:39:31 -- host/auth.sh@44 -- # keyid=4 00:24:08.744 21:39:31 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:08.744 21:39:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:08.744 21:39:31 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:08.744 21:39:31 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:08.744 21:39:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:24:08.744 21:39:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:08.744 21:39:31 -- host/auth.sh@68 -- # digest=sha256 00:24:08.744 21:39:31 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:08.744 21:39:31 -- host/auth.sh@68 -- # keyid=4 00:24:08.744 21:39:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:08.744 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.744 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:08.744 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.744 21:39:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:08.744 21:39:31 -- nvmf/common.sh@717 -- # local ip 00:24:08.744 21:39:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:08.744 21:39:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:08.744 21:39:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.744 21:39:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.744 21:39:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:08.744 21:39:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.744 21:39:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:08.744 21:39:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:08.744 21:39:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:08.744 21:39:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:24:08.744 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.744 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:09.003 nvme0n1 00:24:09.003 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.003 21:39:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.003 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.003 21:39:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:09.003 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:09.003 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.003 21:39:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.003 21:39:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.003 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.003 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:09.003 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.003 21:39:31 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:09.003 21:39:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:09.003 21:39:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:09.003 21:39:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:09.003 21:39:31 -- host/auth.sh@44 -- # digest=sha256 00:24:09.003 21:39:31 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:09.003 21:39:31 -- host/auth.sh@44 -- # keyid=0 00:24:09.003 21:39:31 -- host/auth.sh@45 -- # key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:09.003 21:39:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:09.003 21:39:31 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:09.003 21:39:31 -- host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:09.003 21:39:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:24:09.003 21:39:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:09.003 21:39:31 -- host/auth.sh@68 -- # digest=sha256 00:24:09.003 21:39:31 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:09.003 21:39:31 -- host/auth.sh@68 -- # keyid=0 00:24:09.003 21:39:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:09.003 21:39:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.003 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:09.003 21:39:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.003 21:39:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:09.003 21:39:31 -- nvmf/common.sh@717 -- # local ip 00:24:09.003 21:39:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:09.003 21:39:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:09.003 21:39:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.003 21:39:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.003 21:39:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:09.003 21:39:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.003 21:39:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:09.003 21:39:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:09.003 21:39:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:09.003 21:39:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:09.003 21:39:31 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:24:09.003 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:24:09.262 nvme0n1 00:24:09.262 21:39:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.262 21:39:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.262 21:39:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.262 21:39:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:09.262 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:24:09.262 21:39:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.262 21:39:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.262 21:39:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.262 21:39:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.262 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:24:09.262 21:39:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.262 21:39:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:09.262 21:39:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:09.262 21:39:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:09.262 21:39:32 -- host/auth.sh@44 -- # digest=sha256 00:24:09.262 21:39:32 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:09.262 21:39:32 -- host/auth.sh@44 -- # keyid=1 00:24:09.262 21:39:32 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:09.262 21:39:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:09.262 21:39:32 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:09.262 21:39:32 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:09.262 21:39:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:24:09.262 21:39:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:09.262 21:39:32 -- host/auth.sh@68 -- # digest=sha256 00:24:09.262 21:39:32 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:09.262 21:39:32 -- host/auth.sh@68 -- # keyid=1 00:24:09.262 21:39:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:09.262 21:39:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.262 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:24:09.262 21:39:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.262 21:39:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:09.262 21:39:32 -- nvmf/common.sh@717 -- # local ip 00:24:09.262 21:39:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:09.262 21:39:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:09.262 21:39:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.262 21:39:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.262 21:39:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:09.262 21:39:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.262 21:39:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:09.262 21:39:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:09.262 21:39:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:09.262 21:39:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:09.262 21:39:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.262 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:24:09.521 nvme0n1 00:24:09.521 
21:39:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.521 21:39:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.521 21:39:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:09.521 21:39:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.521 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:24:09.521 21:39:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.521 21:39:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.521 21:39:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.521 21:39:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.521 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:24:09.780 21:39:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.780 21:39:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:09.780 21:39:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:09.780 21:39:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:09.780 21:39:32 -- host/auth.sh@44 -- # digest=sha256 00:24:09.780 21:39:32 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:09.780 21:39:32 -- host/auth.sh@44 -- # keyid=2 00:24:09.780 21:39:32 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:09.780 21:39:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:09.780 21:39:32 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:09.780 21:39:32 -- host/auth.sh@49 -- # echo DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:09.780 21:39:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:24:09.780 21:39:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:09.780 21:39:32 -- host/auth.sh@68 -- # digest=sha256 00:24:09.780 21:39:32 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:09.780 21:39:32 -- host/auth.sh@68 -- # keyid=2 00:24:09.780 21:39:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:09.780 21:39:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.780 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:24:09.780 21:39:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.780 21:39:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:09.780 21:39:32 -- nvmf/common.sh@717 -- # local ip 00:24:09.780 21:39:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:09.780 21:39:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:09.780 21:39:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.780 21:39:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.780 21:39:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:09.780 21:39:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.780 21:39:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:09.780 21:39:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:09.780 21:39:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:09.780 21:39:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:09.780 21:39:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.780 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:24:10.039 nvme0n1 00:24:10.039 21:39:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.039 21:39:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.039 21:39:32 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.039 21:39:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:10.039 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:24:10.039 21:39:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.039 21:39:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.039 21:39:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.039 21:39:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.039 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:24:10.039 21:39:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.039 21:39:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:10.039 21:39:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:10.039 21:39:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:10.039 21:39:32 -- host/auth.sh@44 -- # digest=sha256 00:24:10.039 21:39:32 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:10.039 21:39:32 -- host/auth.sh@44 -- # keyid=3 00:24:10.039 21:39:32 -- host/auth.sh@45 -- # key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:10.039 21:39:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:10.039 21:39:32 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:10.039 21:39:32 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:10.039 21:39:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:24:10.039 21:39:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:10.039 21:39:32 -- host/auth.sh@68 -- # digest=sha256 00:24:10.039 21:39:32 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:10.039 21:39:32 -- host/auth.sh@68 -- # keyid=3 00:24:10.039 21:39:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:10.039 21:39:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.039 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:24:10.039 21:39:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.039 21:39:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:10.039 21:39:32 -- nvmf/common.sh@717 -- # local ip 00:24:10.039 21:39:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:10.039 21:39:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:10.039 21:39:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.039 21:39:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.039 21:39:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:10.039 21:39:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.039 21:39:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:10.039 21:39:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:10.039 21:39:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:10.039 21:39:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:10.039 21:39:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.039 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:24:10.301 nvme0n1 00:24:10.301 21:39:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.301 21:39:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.301 21:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.301 21:39:32 -- host/auth.sh@73 -- # jq -r '.[].name' 
00:24:10.301 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:24:10.301 21:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.301 21:39:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.301 21:39:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.301 21:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.301 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:24:10.301 21:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.301 21:39:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:10.301 21:39:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:10.301 21:39:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:10.301 21:39:33 -- host/auth.sh@44 -- # digest=sha256 00:24:10.301 21:39:33 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:10.301 21:39:33 -- host/auth.sh@44 -- # keyid=4 00:24:10.301 21:39:33 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:10.301 21:39:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:10.301 21:39:33 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:10.301 21:39:33 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:10.301 21:39:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:24:10.301 21:39:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:10.301 21:39:33 -- host/auth.sh@68 -- # digest=sha256 00:24:10.301 21:39:33 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:10.301 21:39:33 -- host/auth.sh@68 -- # keyid=4 00:24:10.301 21:39:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:10.301 21:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.301 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:24:10.301 21:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.301 21:39:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:10.301 21:39:33 -- nvmf/common.sh@717 -- # local ip 00:24:10.302 21:39:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:10.302 21:39:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:10.302 21:39:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.302 21:39:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.302 21:39:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:10.302 21:39:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.302 21:39:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:10.302 21:39:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:10.302 21:39:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:10.302 21:39:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:10.302 21:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.302 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:24:10.561 nvme0n1 00:24:10.561 21:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.561 21:39:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.561 21:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.561 21:39:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:10.562 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:24:10.562 
21:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.562 21:39:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.562 21:39:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.562 21:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.562 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:24:10.562 21:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.562 21:39:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:10.562 21:39:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:10.562 21:39:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:10.562 21:39:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:10.562 21:39:33 -- host/auth.sh@44 -- # digest=sha256 00:24:10.562 21:39:33 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:10.562 21:39:33 -- host/auth.sh@44 -- # keyid=0 00:24:10.562 21:39:33 -- host/auth.sh@45 -- # key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:10.562 21:39:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:10.562 21:39:33 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:10.562 21:39:33 -- host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:10.562 21:39:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:24:10.562 21:39:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:10.562 21:39:33 -- host/auth.sh@68 -- # digest=sha256 00:24:10.562 21:39:33 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:10.562 21:39:33 -- host/auth.sh@68 -- # keyid=0 00:24:10.562 21:39:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:10.562 21:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.562 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:24:10.562 21:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.562 21:39:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:10.562 21:39:33 -- nvmf/common.sh@717 -- # local ip 00:24:10.562 21:39:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:10.562 21:39:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:10.562 21:39:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.562 21:39:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.562 21:39:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:10.562 21:39:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.562 21:39:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:10.562 21:39:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:10.562 21:39:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:10.562 21:39:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:10.562 21:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.562 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:24:11.130 nvme0n1 00:24:11.130 21:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.130 21:39:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.130 21:39:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:11.130 21:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.130 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:24:11.130 21:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.130 21:39:33 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.130 21:39:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.130 21:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.130 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:24:11.130 21:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.130 21:39:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:11.130 21:39:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:11.130 21:39:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:11.130 21:39:33 -- host/auth.sh@44 -- # digest=sha256 00:24:11.130 21:39:33 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:11.130 21:39:33 -- host/auth.sh@44 -- # keyid=1 00:24:11.130 21:39:33 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:11.130 21:39:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:11.130 21:39:33 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:11.130 21:39:33 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:11.130 21:39:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:24:11.130 21:39:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:11.130 21:39:33 -- host/auth.sh@68 -- # digest=sha256 00:24:11.130 21:39:33 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:11.130 21:39:33 -- host/auth.sh@68 -- # keyid=1 00:24:11.130 21:39:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:11.130 21:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.130 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:24:11.130 21:39:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.130 21:39:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:11.130 21:39:33 -- nvmf/common.sh@717 -- # local ip 00:24:11.130 21:39:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:11.130 21:39:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:11.130 21:39:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.130 21:39:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.130 21:39:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:11.130 21:39:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.130 21:39:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:11.130 21:39:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:11.130 21:39:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:11.130 21:39:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:11.130 21:39:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.130 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:24:11.388 nvme0n1 00:24:11.388 21:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.388 21:39:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.388 21:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.388 21:39:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:11.388 21:39:34 -- common/autotest_common.sh@10 -- # set +x 00:24:11.388 21:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.388 21:39:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.388 21:39:34 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:11.388 21:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.388 21:39:34 -- common/autotest_common.sh@10 -- # set +x 00:24:11.388 21:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.388 21:39:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:11.388 21:39:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:11.388 21:39:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:11.388 21:39:34 -- host/auth.sh@44 -- # digest=sha256 00:24:11.388 21:39:34 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:11.388 21:39:34 -- host/auth.sh@44 -- # keyid=2 00:24:11.388 21:39:34 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:11.388 21:39:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:11.388 21:39:34 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:11.388 21:39:34 -- host/auth.sh@49 -- # echo DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:11.388 21:39:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:24:11.388 21:39:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:11.388 21:39:34 -- host/auth.sh@68 -- # digest=sha256 00:24:11.388 21:39:34 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:11.388 21:39:34 -- host/auth.sh@68 -- # keyid=2 00:24:11.388 21:39:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:11.388 21:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.388 21:39:34 -- common/autotest_common.sh@10 -- # set +x 00:24:11.646 21:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.646 21:39:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:11.646 21:39:34 -- nvmf/common.sh@717 -- # local ip 00:24:11.646 21:39:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:11.646 21:39:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:11.646 21:39:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.646 21:39:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.646 21:39:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:11.646 21:39:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.646 21:39:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:11.646 21:39:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:11.646 21:39:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:11.646 21:39:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:11.646 21:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.646 21:39:34 -- common/autotest_common.sh@10 -- # set +x 00:24:11.904 nvme0n1 00:24:11.904 21:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.904 21:39:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.904 21:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.904 21:39:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:11.904 21:39:34 -- common/autotest_common.sh@10 -- # set +x 00:24:11.904 21:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.904 21:39:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.904 21:39:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.904 21:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.904 21:39:34 -- common/autotest_common.sh@10 -- # 
set +x 00:24:11.904 21:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.904 21:39:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:11.904 21:39:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:11.904 21:39:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:11.904 21:39:34 -- host/auth.sh@44 -- # digest=sha256 00:24:11.904 21:39:34 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:11.904 21:39:34 -- host/auth.sh@44 -- # keyid=3 00:24:11.904 21:39:34 -- host/auth.sh@45 -- # key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:11.904 21:39:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:11.904 21:39:34 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:11.904 21:39:34 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:11.904 21:39:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:24:11.904 21:39:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:11.904 21:39:34 -- host/auth.sh@68 -- # digest=sha256 00:24:11.904 21:39:34 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:11.904 21:39:34 -- host/auth.sh@68 -- # keyid=3 00:24:11.904 21:39:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:11.904 21:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.904 21:39:34 -- common/autotest_common.sh@10 -- # set +x 00:24:11.904 21:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.904 21:39:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:11.904 21:39:34 -- nvmf/common.sh@717 -- # local ip 00:24:11.904 21:39:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:11.904 21:39:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:11.904 21:39:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.904 21:39:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.904 21:39:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:11.904 21:39:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.904 21:39:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:11.904 21:39:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:11.904 21:39:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:11.904 21:39:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:11.904 21:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.904 21:39:34 -- common/autotest_common.sh@10 -- # set +x 00:24:12.470 nvme0n1 00:24:12.470 21:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.470 21:39:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:12.470 21:39:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.470 21:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.470 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:24:12.470 21:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.470 21:39:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.470 21:39:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.470 21:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.470 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:24:12.470 21:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.470 21:39:35 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:12.470 21:39:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:12.470 21:39:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:12.470 21:39:35 -- host/auth.sh@44 -- # digest=sha256 00:24:12.470 21:39:35 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:12.470 21:39:35 -- host/auth.sh@44 -- # keyid=4 00:24:12.470 21:39:35 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:12.470 21:39:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:12.470 21:39:35 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:12.470 21:39:35 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:12.470 21:39:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:24:12.470 21:39:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:12.470 21:39:35 -- host/auth.sh@68 -- # digest=sha256 00:24:12.470 21:39:35 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:12.470 21:39:35 -- host/auth.sh@68 -- # keyid=4 00:24:12.470 21:39:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:12.470 21:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.470 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:24:12.470 21:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.470 21:39:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:12.470 21:39:35 -- nvmf/common.sh@717 -- # local ip 00:24:12.470 21:39:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:12.470 21:39:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:12.470 21:39:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.470 21:39:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.470 21:39:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:12.470 21:39:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.470 21:39:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:12.470 21:39:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:12.470 21:39:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:12.470 21:39:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:12.470 21:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.470 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:24:12.728 nvme0n1 00:24:12.729 21:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.729 21:39:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.729 21:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.729 21:39:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:12.729 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:24:12.729 21:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.729 21:39:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.729 21:39:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.729 21:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.729 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:24:12.729 21:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.729 21:39:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:12.729 21:39:35 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:12.729 21:39:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:12.729 21:39:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:12.729 21:39:35 -- host/auth.sh@44 -- # digest=sha256 00:24:12.729 21:39:35 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:12.729 21:39:35 -- host/auth.sh@44 -- # keyid=0 00:24:12.729 21:39:35 -- host/auth.sh@45 -- # key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:12.729 21:39:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:12.729 21:39:35 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:12.729 21:39:35 -- host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:12.729 21:39:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:24:12.729 21:39:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:12.729 21:39:35 -- host/auth.sh@68 -- # digest=sha256 00:24:12.729 21:39:35 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:12.729 21:39:35 -- host/auth.sh@68 -- # keyid=0 00:24:12.729 21:39:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:12.729 21:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.729 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:24:12.729 21:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.729 21:39:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:12.729 21:39:35 -- nvmf/common.sh@717 -- # local ip 00:24:12.729 21:39:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:12.729 21:39:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:12.729 21:39:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.729 21:39:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.729 21:39:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:12.729 21:39:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.729 21:39:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:12.729 21:39:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:12.729 21:39:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:12.729 21:39:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:12.729 21:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.729 21:39:35 -- common/autotest_common.sh@10 -- # set +x 00:24:13.295 nvme0n1 00:24:13.295 21:39:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.295 21:39:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.295 21:39:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:13.295 21:39:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.295 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:24:13.295 21:39:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.295 21:39:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.295 21:39:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.295 21:39:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.295 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:24:13.553 21:39:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.553 21:39:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:13.553 21:39:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:13.553 21:39:36 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:13.553 21:39:36 -- host/auth.sh@44 -- # digest=sha256 00:24:13.553 21:39:36 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:13.553 21:39:36 -- host/auth.sh@44 -- # keyid=1 00:24:13.553 21:39:36 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:13.553 21:39:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:13.553 21:39:36 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:13.553 21:39:36 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:13.553 21:39:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:24:13.553 21:39:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:13.553 21:39:36 -- host/auth.sh@68 -- # digest=sha256 00:24:13.553 21:39:36 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:13.553 21:39:36 -- host/auth.sh@68 -- # keyid=1 00:24:13.553 21:39:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:13.553 21:39:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.553 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:24:13.553 21:39:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.553 21:39:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:13.553 21:39:36 -- nvmf/common.sh@717 -- # local ip 00:24:13.553 21:39:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:13.553 21:39:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:13.553 21:39:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.553 21:39:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.553 21:39:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:13.553 21:39:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.553 21:39:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:13.553 21:39:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:13.553 21:39:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:13.553 21:39:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:13.553 21:39:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.553 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:24:14.119 nvme0n1 00:24:14.119 21:39:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.119 21:39:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.119 21:39:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.119 21:39:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:14.119 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:24:14.119 21:39:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.119 21:39:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.119 21:39:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.119 21:39:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.119 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:24:14.119 21:39:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.119 21:39:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:14.119 21:39:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:14.119 21:39:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:14.119 21:39:36 -- host/auth.sh@44 -- # digest=sha256 
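The nvmet_auth_set_key helper being traced here (host/auth.sh@42-49) sets the target-side DH-HMAC-CHAP parameters for the host NQN before each connection attempt. The xtrace records only the echo commands, not their redirections; the configfs paths below are an assumption based on the standard Linux nvmet layout, so treat this as a sketch rather than the script's exact code:

  nvmet_auth_set_key() {
      local digest dhgroup keyid key
      digest=$1 dhgroup=$2 keyid=$3
      key=${keys[$keyid]}                                  # e.g. DHHC-1:03:...
      # assumed redirection targets; not visible in the trace above
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac($digest)" > "$host/dhchap_hash"           # host/auth.sh@47
      echo "$dhgroup"      > "$host/dhchap_dhgroup"        # host/auth.sh@48
      echo "$key"          > "$host/dhchap_key"            # host/auth.sh@49
  }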
00:24:14.119 21:39:36 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:14.119 21:39:36 -- host/auth.sh@44 -- # keyid=2 00:24:14.119 21:39:36 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:14.119 21:39:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:14.119 21:39:36 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:14.119 21:39:36 -- host/auth.sh@49 -- # echo DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:14.119 21:39:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:24:14.119 21:39:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:14.119 21:39:36 -- host/auth.sh@68 -- # digest=sha256 00:24:14.119 21:39:36 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:14.119 21:39:36 -- host/auth.sh@68 -- # keyid=2 00:24:14.119 21:39:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:14.119 21:39:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.119 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:24:14.119 21:39:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.119 21:39:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:14.119 21:39:36 -- nvmf/common.sh@717 -- # local ip 00:24:14.119 21:39:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.119 21:39:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.119 21:39:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.119 21:39:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.119 21:39:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:14.119 21:39:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.119 21:39:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:14.119 21:39:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:14.119 21:39:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:14.119 21:39:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:14.119 21:39:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.119 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:24:14.685 nvme0n1 00:24:14.685 21:39:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.685 21:39:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.685 21:39:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:14.685 21:39:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.685 21:39:37 -- common/autotest_common.sh@10 -- # set +x 00:24:14.685 21:39:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.685 21:39:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.685 21:39:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.685 21:39:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.685 21:39:37 -- common/autotest_common.sh@10 -- # set +x 00:24:14.685 21:39:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.685 21:39:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:14.685 21:39:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:14.685 21:39:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:14.685 21:39:37 -- host/auth.sh@44 -- # digest=sha256 00:24:14.685 21:39:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:14.685 21:39:37 -- host/auth.sh@44 -- # keyid=3 00:24:14.685 21:39:37 -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:14.685 21:39:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:14.685 21:39:37 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:14.685 21:39:37 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:14.685 21:39:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:24:14.685 21:39:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:14.685 21:39:37 -- host/auth.sh@68 -- # digest=sha256 00:24:14.685 21:39:37 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:14.685 21:39:37 -- host/auth.sh@68 -- # keyid=3 00:24:14.685 21:39:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:14.685 21:39:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.685 21:39:37 -- common/autotest_common.sh@10 -- # set +x 00:24:14.685 21:39:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.685 21:39:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:14.685 21:39:37 -- nvmf/common.sh@717 -- # local ip 00:24:14.685 21:39:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.685 21:39:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.685 21:39:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.685 21:39:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.685 21:39:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:14.685 21:39:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.685 21:39:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:14.685 21:39:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:14.685 21:39:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:14.685 21:39:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:14.686 21:39:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.686 21:39:37 -- common/autotest_common.sh@10 -- # set +x 00:24:15.251 nvme0n1 00:24:15.251 21:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.251 21:39:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.251 21:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.251 21:39:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:15.251 21:39:38 -- common/autotest_common.sh@10 -- # set +x 00:24:15.251 21:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.251 21:39:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.251 21:39:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.251 21:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.251 21:39:38 -- common/autotest_common.sh@10 -- # set +x 00:24:15.251 21:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.251 21:39:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:15.251 21:39:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:15.251 21:39:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:15.251 21:39:38 -- host/auth.sh@44 -- # digest=sha256 00:24:15.251 21:39:38 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:15.251 21:39:38 -- host/auth.sh@44 -- # keyid=4 00:24:15.251 21:39:38 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:15.251 
21:39:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:15.251 21:39:38 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:15.251 21:39:38 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:15.251 21:39:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:24:15.251 21:39:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:15.251 21:39:38 -- host/auth.sh@68 -- # digest=sha256 00:24:15.251 21:39:38 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:15.251 21:39:38 -- host/auth.sh@68 -- # keyid=4 00:24:15.251 21:39:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:15.251 21:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.251 21:39:38 -- common/autotest_common.sh@10 -- # set +x 00:24:15.251 21:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.251 21:39:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:15.251 21:39:38 -- nvmf/common.sh@717 -- # local ip 00:24:15.251 21:39:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:15.251 21:39:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:15.251 21:39:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.251 21:39:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.251 21:39:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:15.251 21:39:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.251 21:39:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:15.251 21:39:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:15.251 21:39:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:15.251 21:39:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:15.251 21:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.251 21:39:38 -- common/autotest_common.sh@10 -- # set +x 00:24:15.818 nvme0n1 00:24:15.818 21:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.818 21:39:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.818 21:39:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:15.818 21:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.818 21:39:38 -- common/autotest_common.sh@10 -- # set +x 00:24:15.818 21:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.818 21:39:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.818 21:39:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.818 21:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.818 21:39:38 -- common/autotest_common.sh@10 -- # set +x 00:24:15.818 21:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.818 21:39:38 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:15.818 21:39:38 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.818 21:39:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:15.818 21:39:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:15.818 21:39:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:15.818 21:39:38 -- host/auth.sh@44 -- # digest=sha384 00:24:15.818 21:39:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:15.818 21:39:38 -- host/auth.sh@44 -- # keyid=0 00:24:15.818 21:39:38 -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:15.818 21:39:38 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:15.818 21:39:38 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:15.818 21:39:38 -- host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:15.818 21:39:38 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:24:15.818 21:39:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:15.818 21:39:38 -- host/auth.sh@68 -- # digest=sha384 00:24:15.818 21:39:38 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:15.818 21:39:38 -- host/auth.sh@68 -- # keyid=0 00:24:15.818 21:39:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:15.818 21:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.818 21:39:38 -- common/autotest_common.sh@10 -- # set +x 00:24:16.076 21:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.076 21:39:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.076 21:39:38 -- nvmf/common.sh@717 -- # local ip 00:24:16.076 21:39:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.076 21:39:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.076 21:39:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.076 21:39:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.076 21:39:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.076 21:39:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.076 21:39:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.076 21:39:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.076 21:39:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.076 21:39:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:16.076 21:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.076 21:39:38 -- common/autotest_common.sh@10 -- # set +x 00:24:16.076 nvme0n1 00:24:16.076 21:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.076 21:39:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.076 21:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.076 21:39:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:16.076 21:39:38 -- common/autotest_common.sh@10 -- # set +x 00:24:16.076 21:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.076 21:39:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.076 21:39:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.076 21:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.076 21:39:38 -- common/autotest_common.sh@10 -- # set +x 00:24:16.076 21:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.076 21:39:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:16.076 21:39:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:16.076 21:39:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:16.076 21:39:38 -- host/auth.sh@44 -- # digest=sha384 00:24:16.076 21:39:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:16.077 21:39:38 -- host/auth.sh@44 -- # keyid=1 00:24:16.077 21:39:38 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:16.077 21:39:38 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:16.077 
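Zooming out, the sweep that produces this whole stretch of log is the triple loop at host/auth.sh@107-111: every digest is crossed with every DH group and every key ID, and each combination is first pushed to the target, then exercised from the host. The loop shape can be read directly off the xtrace; the array contents shown are only the values that appear in this excerpt, and the real arrays may be longer:

  digests=(sha256 sha384)                       # sha384 begins right here
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do             # host/auth.sh@107
      for dhgroup in "${dhgroups[@]}"; do       # host/auth.sh@108
          for keyid in "${!keys[@]}"; do        # host/auth.sh@109, keys 0..4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @110
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # @111
          done
      done
  done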
21:39:38 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:16.077 21:39:38 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:16.077 21:39:38 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:24:16.077 21:39:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:16.077 21:39:38 -- host/auth.sh@68 -- # digest=sha384 00:24:16.077 21:39:38 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:16.077 21:39:38 -- host/auth.sh@68 -- # keyid=1 00:24:16.077 21:39:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:16.077 21:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.077 21:39:38 -- common/autotest_common.sh@10 -- # set +x 00:24:16.077 21:39:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.077 21:39:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.077 21:39:38 -- nvmf/common.sh@717 -- # local ip 00:24:16.077 21:39:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.077 21:39:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.077 21:39:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.077 21:39:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.077 21:39:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.077 21:39:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.077 21:39:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.077 21:39:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.077 21:39:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.077 21:39:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:16.077 21:39:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.077 21:39:38 -- common/autotest_common.sh@10 -- # set +x 00:24:16.335 nvme0n1 00:24:16.335 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.335 21:39:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.335 21:39:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:16.335 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.335 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:16.335 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.335 21:39:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.335 21:39:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.335 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.335 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:16.335 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.335 21:39:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:16.335 21:39:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:16.335 21:39:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:16.335 21:39:39 -- host/auth.sh@44 -- # digest=sha384 00:24:16.335 21:39:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:16.335 21:39:39 -- host/auth.sh@44 -- # keyid=2 00:24:16.335 21:39:39 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:16.335 21:39:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:16.335 21:39:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:16.335 21:39:39 -- host/auth.sh@49 -- # echo 
DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:16.335 21:39:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:24:16.335 21:39:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:16.335 21:39:39 -- host/auth.sh@68 -- # digest=sha384 00:24:16.335 21:39:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:16.335 21:39:39 -- host/auth.sh@68 -- # keyid=2 00:24:16.335 21:39:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:16.335 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.335 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:16.335 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.335 21:39:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.335 21:39:39 -- nvmf/common.sh@717 -- # local ip 00:24:16.335 21:39:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.335 21:39:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.335 21:39:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.335 21:39:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.335 21:39:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.335 21:39:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.335 21:39:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.335 21:39:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.335 21:39:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.335 21:39:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:16.335 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.335 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:16.593 nvme0n1 00:24:16.593 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.593 21:39:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.593 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.593 21:39:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:16.593 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:16.593 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.593 21:39:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.593 21:39:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.593 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.593 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:16.593 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.593 21:39:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:16.593 21:39:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:16.593 21:39:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:16.593 21:39:39 -- host/auth.sh@44 -- # digest=sha384 00:24:16.593 21:39:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:16.593 21:39:39 -- host/auth.sh@44 -- # keyid=3 00:24:16.593 21:39:39 -- host/auth.sh@45 -- # key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:16.593 21:39:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:16.593 21:39:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:16.593 21:39:39 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:16.593 21:39:39 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:24:16.593 21:39:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:16.593 21:39:39 -- host/auth.sh@68 -- # digest=sha384 00:24:16.593 21:39:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:16.593 21:39:39 -- host/auth.sh@68 -- # keyid=3 00:24:16.593 21:39:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:16.593 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.593 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:16.593 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.593 21:39:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.593 21:39:39 -- nvmf/common.sh@717 -- # local ip 00:24:16.593 21:39:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.593 21:39:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.593 21:39:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.593 21:39:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.593 21:39:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.593 21:39:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.593 21:39:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.593 21:39:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.593 21:39:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.593 21:39:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:16.593 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.593 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:16.851 nvme0n1 00:24:16.851 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.851 21:39:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.851 21:39:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:16.851 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.851 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:16.851 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.851 21:39:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.851 21:39:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.851 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.851 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:16.851 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.851 21:39:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:16.851 21:39:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:16.851 21:39:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:16.851 21:39:39 -- host/auth.sh@44 -- # digest=sha384 00:24:16.851 21:39:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:16.851 21:39:39 -- host/auth.sh@44 -- # keyid=4 00:24:16.851 21:39:39 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:16.851 21:39:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:16.851 21:39:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:16.851 21:39:39 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:16.851 21:39:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:24:16.851 21:39:39 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:24:16.851 21:39:39 -- host/auth.sh@68 -- # digest=sha384 00:24:16.851 21:39:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:16.851 21:39:39 -- host/auth.sh@68 -- # keyid=4 00:24:16.851 21:39:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:16.851 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.851 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:16.851 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.851 21:39:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.851 21:39:39 -- nvmf/common.sh@717 -- # local ip 00:24:16.851 21:39:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.851 21:39:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.851 21:39:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.851 21:39:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.851 21:39:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.851 21:39:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.851 21:39:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.851 21:39:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.851 21:39:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.851 21:39:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:16.851 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.851 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:17.110 nvme0n1 00:24:17.110 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.110 21:39:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.110 21:39:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:17.110 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.110 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:17.110 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.110 21:39:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.110 21:39:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.110 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.110 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:17.110 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.110 21:39:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:17.110 21:39:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:17.110 21:39:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:17.110 21:39:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:17.110 21:39:39 -- host/auth.sh@44 -- # digest=sha384 00:24:17.110 21:39:39 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:17.110 21:39:39 -- host/auth.sh@44 -- # keyid=0 00:24:17.110 21:39:39 -- host/auth.sh@45 -- # key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:17.110 21:39:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:17.110 21:39:39 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:17.110 21:39:39 -- host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:17.110 21:39:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:24:17.110 21:39:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:17.110 21:39:39 -- host/auth.sh@68 -- # 
digest=sha384 00:24:17.110 21:39:39 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:17.110 21:39:39 -- host/auth.sh@68 -- # keyid=0 00:24:17.110 21:39:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:17.110 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.110 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:17.110 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.110 21:39:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:17.110 21:39:39 -- nvmf/common.sh@717 -- # local ip 00:24:17.110 21:39:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:17.110 21:39:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:17.110 21:39:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.110 21:39:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.110 21:39:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:17.110 21:39:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.110 21:39:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:17.110 21:39:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:17.110 21:39:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:17.110 21:39:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:17.110 21:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.110 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:24:17.368 nvme0n1 00:24:17.368 21:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.368 21:39:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.368 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.368 21:39:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:17.368 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:17.368 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.368 21:39:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.368 21:39:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.368 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.368 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:17.368 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.368 21:39:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:17.368 21:39:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:17.368 21:39:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:17.368 21:39:40 -- host/auth.sh@44 -- # digest=sha384 00:24:17.368 21:39:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:17.368 21:39:40 -- host/auth.sh@44 -- # keyid=1 00:24:17.368 21:39:40 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:17.368 21:39:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:17.368 21:39:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:17.368 21:39:40 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:17.368 21:39:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:24:17.368 21:39:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:17.368 21:39:40 -- host/auth.sh@68 -- # digest=sha384 00:24:17.368 21:39:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:17.368 21:39:40 -- host/auth.sh@68 
-- # keyid=1 00:24:17.368 21:39:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:17.368 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.368 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:17.368 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.368 21:39:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:17.368 21:39:40 -- nvmf/common.sh@717 -- # local ip 00:24:17.368 21:39:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:17.368 21:39:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:17.368 21:39:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.369 21:39:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.369 21:39:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:17.369 21:39:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.369 21:39:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:17.369 21:39:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:17.369 21:39:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:17.369 21:39:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:17.369 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.369 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:17.627 nvme0n1 00:24:17.627 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.627 21:39:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.627 21:39:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:17.627 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.627 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:17.627 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.627 21:39:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.627 21:39:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.627 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.627 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:17.627 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.627 21:39:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:17.627 21:39:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:17.627 21:39:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:17.627 21:39:40 -- host/auth.sh@44 -- # digest=sha384 00:24:17.627 21:39:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:17.627 21:39:40 -- host/auth.sh@44 -- # keyid=2 00:24:17.627 21:39:40 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:17.627 21:39:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:17.627 21:39:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:17.627 21:39:40 -- host/auth.sh@49 -- # echo DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:17.627 21:39:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:24:17.627 21:39:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:17.627 21:39:40 -- host/auth.sh@68 -- # digest=sha384 00:24:17.627 21:39:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:17.627 21:39:40 -- host/auth.sh@68 -- # keyid=2 00:24:17.627 21:39:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:17.627 21:39:40 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.627 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:17.627 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.627 21:39:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:17.627 21:39:40 -- nvmf/common.sh@717 -- # local ip 00:24:17.627 21:39:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:17.627 21:39:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:17.627 21:39:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.627 21:39:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.627 21:39:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:17.627 21:39:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.627 21:39:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:17.627 21:39:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:17.627 21:39:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:17.627 21:39:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:17.627 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.627 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:17.627 nvme0n1 00:24:17.627 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.885 21:39:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.885 21:39:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:17.885 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.885 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:17.885 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.885 21:39:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.885 21:39:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.885 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.885 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:17.885 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.885 21:39:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:17.885 21:39:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:17.885 21:39:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:17.885 21:39:40 -- host/auth.sh@44 -- # digest=sha384 00:24:17.885 21:39:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:17.886 21:39:40 -- host/auth.sh@44 -- # keyid=3 00:24:17.886 21:39:40 -- host/auth.sh@45 -- # key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:17.886 21:39:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:17.886 21:39:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:17.886 21:39:40 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:17.886 21:39:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:24:17.886 21:39:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:17.886 21:39:40 -- host/auth.sh@68 -- # digest=sha384 00:24:17.886 21:39:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:17.886 21:39:40 -- host/auth.sh@68 -- # keyid=3 00:24:17.886 21:39:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:17.886 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.886 21:39:40 -- common/autotest_common.sh@10 -- # set +x 
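connect_authenticate (host/auth.sh@66-74) is the host-side half of each iteration, and its body can be read almost verbatim out of the trace: narrow the initiator's allowed digest and DH group, attach with the matching secret, confirm the controller came up, then detach. Only the plumbing between the jq output and the name check is assumed here:

  connect_authenticate() {
      local digest dhgroup keyid
      digest=$1 dhgroup=$2 keyid=$3
      # restrict the host to the combination under test (host/auth.sh@69)
      rpc_cmd bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # connect with the matching secret; authentication must succeed (@70)
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid"
      # verify the controller is really there, then tear it down (@73-74)
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }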
00:24:17.886 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.886 21:39:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:17.886 21:39:40 -- nvmf/common.sh@717 -- # local ip 00:24:17.886 21:39:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:17.886 21:39:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:17.886 21:39:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.886 21:39:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.886 21:39:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:17.886 21:39:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.886 21:39:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:17.886 21:39:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:17.886 21:39:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:17.886 21:39:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:17.886 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.886 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:17.886 nvme0n1 00:24:17.886 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.886 21:39:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.886 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.886 21:39:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:17.886 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:18.144 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.144 21:39:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.144 21:39:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.144 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.144 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:18.144 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.144 21:39:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.144 21:39:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:18.144 21:39:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.144 21:39:40 -- host/auth.sh@44 -- # digest=sha384 00:24:18.144 21:39:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:18.144 21:39:40 -- host/auth.sh@44 -- # keyid=4 00:24:18.144 21:39:40 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:18.144 21:39:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:18.144 21:39:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:18.144 21:39:40 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:18.144 21:39:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:24:18.144 21:39:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:18.144 21:39:40 -- host/auth.sh@68 -- # digest=sha384 00:24:18.144 21:39:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:18.144 21:39:40 -- host/auth.sh@68 -- # keyid=4 00:24:18.144 21:39:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:18.144 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.144 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:18.144 21:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
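Every connect in this trace first runs get_main_ns_ip, and its expansion is complete enough to reconstruct the helper: it maps each transport to the *name* of the environment variable holding the right address, then resolves that name with bash indirect expansion, which is why the trace shows ip=NVMF_INITIATOR_IP followed by echo 10.0.0.1. A reconstruction, where only the guard variable's name (TEST_TRANSPORT below) is an assumption:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1     # trace: [[ -z tcp ]]; name assumed
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}     # a variable name, not an address
        [[ -z ${!ip} ]] && return 1              # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                            # indirect expansion -> 10.0.0.1
    }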
00:24:18.144 21:39:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.144 21:39:40 -- nvmf/common.sh@717 -- # local ip 00:24:18.144 21:39:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.144 21:39:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.144 21:39:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.144 21:39:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.144 21:39:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.144 21:39:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.144 21:39:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.144 21:39:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.144 21:39:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.144 21:39:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:18.144 21:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.144 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:24:18.144 nvme0n1 00:24:18.144 21:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.144 21:39:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.144 21:39:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.144 21:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.144 21:39:41 -- common/autotest_common.sh@10 -- # set +x 00:24:18.144 21:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.403 21:39:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.403 21:39:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.403 21:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.403 21:39:41 -- common/autotest_common.sh@10 -- # set +x 00:24:18.403 21:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.403 21:39:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.403 21:39:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.403 21:39:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:18.403 21:39:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.403 21:39:41 -- host/auth.sh@44 -- # digest=sha384 00:24:18.403 21:39:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:18.403 21:39:41 -- host/auth.sh@44 -- # keyid=0 00:24:18.403 21:39:41 -- host/auth.sh@45 -- # key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:18.403 21:39:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:18.403 21:39:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:18.403 21:39:41 -- host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:18.403 21:39:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:24:18.403 21:39:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:18.403 21:39:41 -- host/auth.sh@68 -- # digest=sha384 00:24:18.403 21:39:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:18.403 21:39:41 -- host/auth.sh@68 -- # keyid=0 00:24:18.403 21:39:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:18.403 21:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.403 21:39:41 -- common/autotest_common.sh@10 -- # set +x 00:24:18.403 21:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.403 21:39:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.403 21:39:41 -- 
nvmf/common.sh@717 -- # local ip 00:24:18.403 21:39:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.403 21:39:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.403 21:39:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.403 21:39:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.403 21:39:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.403 21:39:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.403 21:39:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.403 21:39:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.403 21:39:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.403 21:39:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:18.403 21:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.403 21:39:41 -- common/autotest_common.sh@10 -- # set +x 00:24:18.661 nvme0n1 00:24:18.661 21:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.661 21:39:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.661 21:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.661 21:39:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.661 21:39:41 -- common/autotest_common.sh@10 -- # set +x 00:24:18.661 21:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.661 21:39:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.661 21:39:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.661 21:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.661 21:39:41 -- common/autotest_common.sh@10 -- # set +x 00:24:18.661 21:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.661 21:39:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.661 21:39:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:18.661 21:39:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.661 21:39:41 -- host/auth.sh@44 -- # digest=sha384 00:24:18.661 21:39:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:18.661 21:39:41 -- host/auth.sh@44 -- # keyid=1 00:24:18.661 21:39:41 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:18.661 21:39:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:18.661 21:39:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:18.661 21:39:41 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:18.661 21:39:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:24:18.661 21:39:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:18.661 21:39:41 -- host/auth.sh@68 -- # digest=sha384 00:24:18.661 21:39:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:18.661 21:39:41 -- host/auth.sh@68 -- # keyid=1 00:24:18.661 21:39:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:18.661 21:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.661 21:39:41 -- common/autotest_common.sh@10 -- # set +x 00:24:18.661 21:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.661 21:39:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.661 21:39:41 -- nvmf/common.sh@717 -- # local ip 00:24:18.661 21:39:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.661 21:39:41 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.661 21:39:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.661 21:39:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.661 21:39:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.661 21:39:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.661 21:39:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.661 21:39:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.661 21:39:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.661 21:39:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:18.661 21:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.661 21:39:41 -- common/autotest_common.sh@10 -- # set +x 00:24:18.920 nvme0n1 00:24:18.920 21:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.920 21:39:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.920 21:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.920 21:39:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.920 21:39:41 -- common/autotest_common.sh@10 -- # set +x 00:24:18.920 21:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.920 21:39:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.920 21:39:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.920 21:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.920 21:39:41 -- common/autotest_common.sh@10 -- # set +x 00:24:18.920 21:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.920 21:39:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.920 21:39:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:18.920 21:39:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.920 21:39:41 -- host/auth.sh@44 -- # digest=sha384 00:24:18.920 21:39:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:18.920 21:39:41 -- host/auth.sh@44 -- # keyid=2 00:24:18.920 21:39:41 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:18.920 21:39:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:18.920 21:39:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:18.920 21:39:41 -- host/auth.sh@49 -- # echo DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:18.920 21:39:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:24:18.920 21:39:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:18.920 21:39:41 -- host/auth.sh@68 -- # digest=sha384 00:24:18.920 21:39:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:18.920 21:39:41 -- host/auth.sh@68 -- # keyid=2 00:24:18.920 21:39:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:18.920 21:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.920 21:39:41 -- common/autotest_common.sh@10 -- # set +x 00:24:18.920 21:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.920 21:39:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.920 21:39:41 -- nvmf/common.sh@717 -- # local ip 00:24:18.920 21:39:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.920 21:39:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.920 21:39:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.920 21:39:41 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.920 21:39:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.920 21:39:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.920 21:39:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.920 21:39:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.920 21:39:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.920 21:39:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:18.920 21:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.920 21:39:41 -- common/autotest_common.sh@10 -- # set +x 00:24:19.178 nvme0n1 00:24:19.178 21:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.178 21:39:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.178 21:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.178 21:39:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.178 21:39:41 -- common/autotest_common.sh@10 -- # set +x 00:24:19.178 21:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.178 21:39:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.178 21:39:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.178 21:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.178 21:39:42 -- common/autotest_common.sh@10 -- # set +x 00:24:19.178 21:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.178 21:39:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:19.178 21:39:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:19.178 21:39:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.178 21:39:42 -- host/auth.sh@44 -- # digest=sha384 00:24:19.178 21:39:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:19.178 21:39:42 -- host/auth.sh@44 -- # keyid=3 00:24:19.178 21:39:42 -- host/auth.sh@45 -- # key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:19.178 21:39:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:19.178 21:39:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:19.178 21:39:42 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:19.178 21:39:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:24:19.178 21:39:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.178 21:39:42 -- host/auth.sh@68 -- # digest=sha384 00:24:19.178 21:39:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:19.178 21:39:42 -- host/auth.sh@68 -- # keyid=3 00:24:19.178 21:39:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:19.178 21:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.178 21:39:42 -- common/autotest_common.sh@10 -- # set +x 00:24:19.178 21:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.178 21:39:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.178 21:39:42 -- nvmf/common.sh@717 -- # local ip 00:24:19.178 21:39:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.178 21:39:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.178 21:39:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.178 21:39:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.178 21:39:42 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:24:19.178 21:39:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.178 21:39:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:19.178 21:39:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:19.178 21:39:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:19.178 21:39:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:19.178 21:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.178 21:39:42 -- common/autotest_common.sh@10 -- # set +x 00:24:19.436 nvme0n1 00:24:19.436 21:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.436 21:39:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.436 21:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.436 21:39:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.436 21:39:42 -- common/autotest_common.sh@10 -- # set +x 00:24:19.436 21:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.436 21:39:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.436 21:39:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.436 21:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.436 21:39:42 -- common/autotest_common.sh@10 -- # set +x 00:24:19.693 21:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.693 21:39:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:19.693 21:39:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:19.693 21:39:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.693 21:39:42 -- host/auth.sh@44 -- # digest=sha384 00:24:19.693 21:39:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:19.693 21:39:42 -- host/auth.sh@44 -- # keyid=4 00:24:19.693 21:39:42 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:19.693 21:39:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:19.693 21:39:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:19.693 21:39:42 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:19.693 21:39:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:24:19.693 21:39:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.693 21:39:42 -- host/auth.sh@68 -- # digest=sha384 00:24:19.693 21:39:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:19.693 21:39:42 -- host/auth.sh@68 -- # keyid=4 00:24:19.693 21:39:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:19.693 21:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.693 21:39:42 -- common/autotest_common.sh@10 -- # set +x 00:24:19.693 21:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.693 21:39:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.693 21:39:42 -- nvmf/common.sh@717 -- # local ip 00:24:19.693 21:39:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.693 21:39:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.693 21:39:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.693 21:39:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.693 21:39:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:19.693 21:39:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:24:19.693 21:39:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:19.693 21:39:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:19.693 21:39:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:19.693 21:39:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:19.693 21:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.693 21:39:42 -- common/autotest_common.sh@10 -- # set +x 00:24:19.951 nvme0n1 00:24:19.951 21:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.951 21:39:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.951 21:39:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.951 21:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.951 21:39:42 -- common/autotest_common.sh@10 -- # set +x 00:24:19.951 21:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.951 21:39:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.951 21:39:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.951 21:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.951 21:39:42 -- common/autotest_common.sh@10 -- # set +x 00:24:19.951 21:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.951 21:39:42 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.951 21:39:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:19.951 21:39:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:19.951 21:39:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.951 21:39:42 -- host/auth.sh@44 -- # digest=sha384 00:24:19.951 21:39:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:19.951 21:39:42 -- host/auth.sh@44 -- # keyid=0 00:24:19.951 21:39:42 -- host/auth.sh@45 -- # key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:19.951 21:39:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:19.951 21:39:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:19.951 21:39:42 -- host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:19.951 21:39:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:24:19.951 21:39:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.951 21:39:42 -- host/auth.sh@68 -- # digest=sha384 00:24:19.951 21:39:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:19.951 21:39:42 -- host/auth.sh@68 -- # keyid=0 00:24:19.951 21:39:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:19.951 21:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.951 21:39:42 -- common/autotest_common.sh@10 -- # set +x 00:24:19.951 21:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.951 21:39:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.951 21:39:42 -- nvmf/common.sh@717 -- # local ip 00:24:19.951 21:39:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.951 21:39:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.951 21:39:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.951 21:39:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.951 21:39:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:19.951 21:39:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.951 21:39:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:19.951 
21:39:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:19.951 21:39:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:19.951 21:39:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:19.951 21:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.951 21:39:42 -- common/autotest_common.sh@10 -- # set +x 00:24:20.209 nvme0n1 00:24:20.209 21:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.209 21:39:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.209 21:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.209 21:39:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:20.209 21:39:43 -- common/autotest_common.sh@10 -- # set +x 00:24:20.209 21:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.209 21:39:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.209 21:39:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.209 21:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.209 21:39:43 -- common/autotest_common.sh@10 -- # set +x 00:24:20.209 21:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.209 21:39:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:20.209 21:39:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:20.209 21:39:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:20.209 21:39:43 -- host/auth.sh@44 -- # digest=sha384 00:24:20.209 21:39:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:20.209 21:39:43 -- host/auth.sh@44 -- # keyid=1 00:24:20.209 21:39:43 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:20.209 21:39:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:20.209 21:39:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:20.209 21:39:43 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:20.209 21:39:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:24:20.209 21:39:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:20.209 21:39:43 -- host/auth.sh@68 -- # digest=sha384 00:24:20.209 21:39:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:20.209 21:39:43 -- host/auth.sh@68 -- # keyid=1 00:24:20.209 21:39:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:20.209 21:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.210 21:39:43 -- common/autotest_common.sh@10 -- # set +x 00:24:20.210 21:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.210 21:39:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:20.467 21:39:43 -- nvmf/common.sh@717 -- # local ip 00:24:20.468 21:39:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:20.468 21:39:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:20.468 21:39:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.468 21:39:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.468 21:39:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:20.468 21:39:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.468 21:39:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:20.468 21:39:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:20.468 21:39:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
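Each authentication round in the log is the same four RPCs bracketed by the rpc_cmd wrapper; the recurring xtrace_disable / set +x / [[ 0 == 0 ]] lines from autotest_common.sh are that wrapper suppressing tracing around rpc.py and checking the RPC's exit status. Condensed from the trace into one host-side function, with only the glue paraphrased:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Restrict the host to one digest/dhgroup pair, then connect with the key.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid"
        # Authentication succeeded iff the controller actually appeared.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The bare nvme0n1 lines interleaved in the output are the bdev reported by each successful attach, and the [[ nvme0 == \n\v\m\e\0 ]] comparison then confirms the controller by name before detaching.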
00:24:20.468 21:39:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:20.468 21:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.468 21:39:43 -- common/autotest_common.sh@10 -- # set +x 00:24:20.725 nvme0n1 00:24:20.726 21:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.726 21:39:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.726 21:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.726 21:39:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:20.726 21:39:43 -- common/autotest_common.sh@10 -- # set +x 00:24:20.726 21:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.726 21:39:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.726 21:39:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.726 21:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.726 21:39:43 -- common/autotest_common.sh@10 -- # set +x 00:24:20.726 21:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.726 21:39:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:20.726 21:39:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:20.726 21:39:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:20.726 21:39:43 -- host/auth.sh@44 -- # digest=sha384 00:24:20.726 21:39:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:20.726 21:39:43 -- host/auth.sh@44 -- # keyid=2 00:24:20.726 21:39:43 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:20.726 21:39:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:20.726 21:39:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:20.726 21:39:43 -- host/auth.sh@49 -- # echo DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:20.726 21:39:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:24:20.726 21:39:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:20.726 21:39:43 -- host/auth.sh@68 -- # digest=sha384 00:24:20.726 21:39:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:20.726 21:39:43 -- host/auth.sh@68 -- # keyid=2 00:24:20.726 21:39:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:20.726 21:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.726 21:39:43 -- common/autotest_common.sh@10 -- # set +x 00:24:20.726 21:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.726 21:39:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:20.726 21:39:43 -- nvmf/common.sh@717 -- # local ip 00:24:20.726 21:39:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:20.726 21:39:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:20.726 21:39:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.726 21:39:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.726 21:39:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:20.726 21:39:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.726 21:39:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:20.726 21:39:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:20.726 21:39:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:20.726 21:39:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:20.726 21:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.726 21:39:43 -- common/autotest_common.sh@10 -- # set +x 00:24:21.292 nvme0n1 00:24:21.292 21:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.292 21:39:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:21.292 21:39:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.292 21:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.292 21:39:43 -- common/autotest_common.sh@10 -- # set +x 00:24:21.292 21:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.292 21:39:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.292 21:39:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.292 21:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.292 21:39:43 -- common/autotest_common.sh@10 -- # set +x 00:24:21.292 21:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.292 21:39:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:21.292 21:39:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:21.292 21:39:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:21.292 21:39:43 -- host/auth.sh@44 -- # digest=sha384 00:24:21.292 21:39:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:21.292 21:39:43 -- host/auth.sh@44 -- # keyid=3 00:24:21.292 21:39:43 -- host/auth.sh@45 -- # key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:21.292 21:39:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:21.292 21:39:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:21.292 21:39:43 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:21.292 21:39:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:24:21.292 21:39:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:21.292 21:39:43 -- host/auth.sh@68 -- # digest=sha384 00:24:21.292 21:39:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:21.292 21:39:43 -- host/auth.sh@68 -- # keyid=3 00:24:21.292 21:39:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:21.292 21:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.292 21:39:43 -- common/autotest_common.sh@10 -- # set +x 00:24:21.292 21:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.292 21:39:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:21.292 21:39:43 -- nvmf/common.sh@717 -- # local ip 00:24:21.292 21:39:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:21.292 21:39:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:21.292 21:39:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.292 21:39:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.292 21:39:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:21.292 21:39:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.292 21:39:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:21.292 21:39:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:21.292 21:39:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:21.292 21:39:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:21.292 21:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 
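The secrets being cycled differ in their DHHC-1 transformation tag: DHHC-1:00: carries the cleartext secret, while :01:, :02: and :03: carry a secret pre-hashed with SHA-256/384/512 (per the NVMe in-band authentication spec, TP 8006). The base64 payload is the key material plus a 4-byte CRC-32, so the hashed forms decode to 36, 52 and 68 bytes, which matches the key lengths in this log. A quick self-contained shape checker, written against this log's keys and not part of the test itself:

    check_dhchap_key() {
        local key=$1 t b64 len
        IFS=: read -r _ t b64 _ <<< "$key"        # DHHC-1 : tag : base64 :
        len=$(printf '%s' "$b64" | base64 -d | wc -c)
        case $t in
            01) ((len == 36)) ;;                  # 32-byte key + CRC-32
            02) ((len == 52)) ;;                  # 48-byte key + CRC-32
            03) ((len == 68)) ;;                  # 64-byte key + CRC-32
            00) ((len == 36 || len == 52 || len == 68)) ;;
            *)  false ;;
        esac
    }
    check_dhchap_key 'DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX:' && echo ok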
00:24:21.292 21:39:43 -- common/autotest_common.sh@10 -- # set +x 00:24:21.550 nvme0n1 00:24:21.550 21:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.550 21:39:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:21.550 21:39:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.550 21:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.550 21:39:44 -- common/autotest_common.sh@10 -- # set +x 00:24:21.550 21:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.550 21:39:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.550 21:39:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.550 21:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.550 21:39:44 -- common/autotest_common.sh@10 -- # set +x 00:24:21.550 21:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.550 21:39:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:21.550 21:39:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:21.550 21:39:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:21.550 21:39:44 -- host/auth.sh@44 -- # digest=sha384 00:24:21.550 21:39:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:21.550 21:39:44 -- host/auth.sh@44 -- # keyid=4 00:24:21.550 21:39:44 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:21.550 21:39:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:21.550 21:39:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:21.550 21:39:44 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:21.550 21:39:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:24:21.550 21:39:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:21.550 21:39:44 -- host/auth.sh@68 -- # digest=sha384 00:24:21.550 21:39:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:21.550 21:39:44 -- host/auth.sh@68 -- # keyid=4 00:24:21.550 21:39:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:21.550 21:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.550 21:39:44 -- common/autotest_common.sh@10 -- # set +x 00:24:21.550 21:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.550 21:39:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:21.550 21:39:44 -- nvmf/common.sh@717 -- # local ip 00:24:21.550 21:39:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:21.550 21:39:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:21.550 21:39:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.550 21:39:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.550 21:39:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:21.550 21:39:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.550 21:39:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:21.550 21:39:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:21.550 21:39:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:21.550 21:39:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:21.550 21:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.550 21:39:44 -- common/autotest_common.sh@10 -- # set +x 00:24:22.116 
nvme0n1 00:24:22.116 21:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.116 21:39:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.116 21:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.116 21:39:44 -- common/autotest_common.sh@10 -- # set +x 00:24:22.116 21:39:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:22.116 21:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.116 21:39:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.116 21:39:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.116 21:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.116 21:39:44 -- common/autotest_common.sh@10 -- # set +x 00:24:22.116 21:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.116 21:39:44 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.116 21:39:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:22.116 21:39:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:22.116 21:39:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:22.116 21:39:44 -- host/auth.sh@44 -- # digest=sha384 00:24:22.116 21:39:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:22.116 21:39:44 -- host/auth.sh@44 -- # keyid=0 00:24:22.116 21:39:44 -- host/auth.sh@45 -- # key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:22.116 21:39:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:22.116 21:39:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:22.116 21:39:44 -- host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:22.116 21:39:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:24:22.116 21:39:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:22.116 21:39:44 -- host/auth.sh@68 -- # digest=sha384 00:24:22.116 21:39:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:22.116 21:39:44 -- host/auth.sh@68 -- # keyid=0 00:24:22.116 21:39:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:22.116 21:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.116 21:39:44 -- common/autotest_common.sh@10 -- # set +x 00:24:22.116 21:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.116 21:39:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:22.116 21:39:44 -- nvmf/common.sh@717 -- # local ip 00:24:22.116 21:39:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:22.116 21:39:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:22.116 21:39:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.116 21:39:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.116 21:39:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:22.117 21:39:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.117 21:39:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:22.117 21:39:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:22.117 21:39:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:22.117 21:39:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:22.117 21:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.117 21:39:44 -- common/autotest_common.sh@10 -- # set +x 00:24:22.682 nvme0n1 00:24:22.682 21:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
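The for-loops expanded at host/auth.sh@108 and @109 just above (and @107 further on, when the digest advances to sha512) are the three nested loops driving this whole section: every digest, times every FFDHE group, times every key slot. Reconstructed below, with the array contents inferred from the values this log actually exercises; sha256 is an assumption, its pass having presumably scrolled by earlier:

    digests=(sha256 sha384 sha512)      # sha256 assumed from the earlier pass
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do                              # host/auth.sh@107
        for dhgroup in "${dhgroups[@]}"; do                        # host/auth.sh@108
            for keyid in "${!keys[@]}"; do                         # host/auth.sh@109
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # host/auth.sh@110
                connect_authenticate "$digest" "$dhgroup" "$keyid" # host/auth.sh@111
            done
        done
    done

With five key slots and five groups per digest, each digest contributes 25 set-key/connect/verify/detach rounds, which is why the same few RPCs repeat for pages of this log.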
00:24:22.682 21:39:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.682 21:39:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:22.682 21:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.682 21:39:45 -- common/autotest_common.sh@10 -- # set +x 00:24:22.682 21:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.682 21:39:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.682 21:39:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.682 21:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.682 21:39:45 -- common/autotest_common.sh@10 -- # set +x 00:24:22.682 21:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.682 21:39:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:22.682 21:39:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:22.682 21:39:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:22.682 21:39:45 -- host/auth.sh@44 -- # digest=sha384 00:24:22.682 21:39:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:22.682 21:39:45 -- host/auth.sh@44 -- # keyid=1 00:24:22.682 21:39:45 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:22.682 21:39:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:22.682 21:39:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:22.682 21:39:45 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:22.682 21:39:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:24:22.682 21:39:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:22.682 21:39:45 -- host/auth.sh@68 -- # digest=sha384 00:24:22.682 21:39:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:22.682 21:39:45 -- host/auth.sh@68 -- # keyid=1 00:24:22.682 21:39:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:22.682 21:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.682 21:39:45 -- common/autotest_common.sh@10 -- # set +x 00:24:22.682 21:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.682 21:39:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:22.682 21:39:45 -- nvmf/common.sh@717 -- # local ip 00:24:22.682 21:39:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:22.682 21:39:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:22.682 21:39:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.682 21:39:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.682 21:39:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:22.682 21:39:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.682 21:39:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:22.682 21:39:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:22.682 21:39:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:22.682 21:39:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:22.682 21:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.682 21:39:45 -- common/autotest_common.sh@10 -- # set +x 00:24:23.248 nvme0n1 00:24:23.248 21:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.248 21:39:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.248 21:39:46 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.248 21:39:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:23.248 21:39:46 -- common/autotest_common.sh@10 -- # set +x 00:24:23.248 21:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.248 21:39:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.248 21:39:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.248 21:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.248 21:39:46 -- common/autotest_common.sh@10 -- # set +x 00:24:23.248 21:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.248 21:39:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:23.248 21:39:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:23.248 21:39:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:23.248 21:39:46 -- host/auth.sh@44 -- # digest=sha384 00:24:23.248 21:39:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:23.248 21:39:46 -- host/auth.sh@44 -- # keyid=2 00:24:23.248 21:39:46 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:23.248 21:39:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:23.248 21:39:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:23.248 21:39:46 -- host/auth.sh@49 -- # echo DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:23.248 21:39:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:24:23.248 21:39:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:23.248 21:39:46 -- host/auth.sh@68 -- # digest=sha384 00:24:23.248 21:39:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:23.248 21:39:46 -- host/auth.sh@68 -- # keyid=2 00:24:23.248 21:39:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:23.248 21:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.248 21:39:46 -- common/autotest_common.sh@10 -- # set +x 00:24:23.248 21:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.248 21:39:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:23.248 21:39:46 -- nvmf/common.sh@717 -- # local ip 00:24:23.248 21:39:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:23.248 21:39:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:23.248 21:39:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.248 21:39:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.248 21:39:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:23.248 21:39:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.248 21:39:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:23.248 21:39:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:23.248 21:39:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:23.248 21:39:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:23.248 21:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.248 21:39:46 -- common/autotest_common.sh@10 -- # set +x 00:24:23.813 nvme0n1 00:24:23.813 21:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.813 21:39:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.813 21:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.813 21:39:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:23.813 21:39:46 -- common/autotest_common.sh@10 
-- # set +x 00:24:23.813 21:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.813 21:39:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.813 21:39:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.813 21:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.813 21:39:46 -- common/autotest_common.sh@10 -- # set +x 00:24:24.071 21:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.071 21:39:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.071 21:39:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:24.071 21:39:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.071 21:39:46 -- host/auth.sh@44 -- # digest=sha384 00:24:24.071 21:39:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:24.071 21:39:46 -- host/auth.sh@44 -- # keyid=3 00:24:24.071 21:39:46 -- host/auth.sh@45 -- # key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:24.071 21:39:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:24.071 21:39:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:24.071 21:39:46 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:24.071 21:39:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:24:24.071 21:39:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.071 21:39:46 -- host/auth.sh@68 -- # digest=sha384 00:24:24.071 21:39:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:24.071 21:39:46 -- host/auth.sh@68 -- # keyid=3 00:24:24.071 21:39:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:24.071 21:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.071 21:39:46 -- common/autotest_common.sh@10 -- # set +x 00:24:24.071 21:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.071 21:39:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.071 21:39:46 -- nvmf/common.sh@717 -- # local ip 00:24:24.071 21:39:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.071 21:39:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.071 21:39:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.071 21:39:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.071 21:39:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.071 21:39:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.071 21:39:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.071 21:39:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.071 21:39:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.071 21:39:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:24.071 21:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.071 21:39:46 -- common/autotest_common.sh@10 -- # set +x 00:24:24.637 nvme0n1 00:24:24.637 21:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.637 21:39:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.637 21:39:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.637 21:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.637 21:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:24.637 21:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.637 21:39:47 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.637 21:39:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.637 21:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.637 21:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:24.637 21:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.637 21:39:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.637 21:39:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:24.637 21:39:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.637 21:39:47 -- host/auth.sh@44 -- # digest=sha384 00:24:24.637 21:39:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:24.637 21:39:47 -- host/auth.sh@44 -- # keyid=4 00:24:24.637 21:39:47 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:24.637 21:39:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:24.637 21:39:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:24.637 21:39:47 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:24.637 21:39:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:24:24.637 21:39:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.637 21:39:47 -- host/auth.sh@68 -- # digest=sha384 00:24:24.637 21:39:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:24.637 21:39:47 -- host/auth.sh@68 -- # keyid=4 00:24:24.637 21:39:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:24.637 21:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.637 21:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:24.637 21:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.637 21:39:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.637 21:39:47 -- nvmf/common.sh@717 -- # local ip 00:24:24.637 21:39:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.637 21:39:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.637 21:39:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.637 21:39:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.637 21:39:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.637 21:39:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.637 21:39:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.637 21:39:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.637 21:39:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.637 21:39:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.637 21:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.637 21:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:25.203 nvme0n1 00:24:25.203 21:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.203 21:39:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.203 21:39:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.203 21:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.203 21:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:25.203 21:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.203 21:39:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.203 21:39:47 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.203 21:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.203 21:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:25.203 21:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.203 21:39:47 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:25.203 21:39:47 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.203 21:39:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.203 21:39:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:25.203 21:39:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.203 21:39:47 -- host/auth.sh@44 -- # digest=sha512 00:24:25.203 21:39:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:25.203 21:39:47 -- host/auth.sh@44 -- # keyid=0 00:24:25.203 21:39:47 -- host/auth.sh@45 -- # key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:25.203 21:39:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:25.203 21:39:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:25.203 21:39:47 -- host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:25.203 21:39:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:24:25.203 21:39:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.203 21:39:47 -- host/auth.sh@68 -- # digest=sha512 00:24:25.203 21:39:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:25.203 21:39:47 -- host/auth.sh@68 -- # keyid=0 00:24:25.203 21:39:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:25.203 21:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.203 21:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:25.203 21:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.203 21:39:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.203 21:39:47 -- nvmf/common.sh@717 -- # local ip 00:24:25.203 21:39:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.203 21:39:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.203 21:39:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.203 21:39:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.203 21:39:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.203 21:39:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.203 21:39:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.203 21:39:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.203 21:39:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.203 21:39:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:25.203 21:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.203 21:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:25.461 nvme0n1 00:24:25.461 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.461 21:39:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.461 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.461 21:39:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.461 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.461 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.461 21:39:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.461 21:39:48 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.461 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.461 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.461 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.461 21:39:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.461 21:39:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:25.461 21:39:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.461 21:39:48 -- host/auth.sh@44 -- # digest=sha512 00:24:25.461 21:39:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:25.461 21:39:48 -- host/auth.sh@44 -- # keyid=1 00:24:25.461 21:39:48 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:25.461 21:39:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:25.461 21:39:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:25.461 21:39:48 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:25.461 21:39:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:24:25.461 21:39:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.461 21:39:48 -- host/auth.sh@68 -- # digest=sha512 00:24:25.461 21:39:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:25.461 21:39:48 -- host/auth.sh@68 -- # keyid=1 00:24:25.461 21:39:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:25.461 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.461 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.461 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.461 21:39:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.461 21:39:48 -- nvmf/common.sh@717 -- # local ip 00:24:25.461 21:39:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.461 21:39:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.461 21:39:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.461 21:39:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.461 21:39:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.461 21:39:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.462 21:39:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.462 21:39:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.462 21:39:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.462 21:39:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:25.462 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.462 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.720 nvme0n1 00:24:25.720 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.720 21:39:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.720 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.720 21:39:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.720 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.720 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.720 21:39:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.720 21:39:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.720 21:39:48 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:24:25.720 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.720 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.720 21:39:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.720 21:39:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:25.720 21:39:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.720 21:39:48 -- host/auth.sh@44 -- # digest=sha512 00:24:25.720 21:39:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:25.720 21:39:48 -- host/auth.sh@44 -- # keyid=2 00:24:25.720 21:39:48 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:25.720 21:39:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:25.720 21:39:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:25.720 21:39:48 -- host/auth.sh@49 -- # echo DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:25.720 21:39:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:24:25.720 21:39:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.720 21:39:48 -- host/auth.sh@68 -- # digest=sha512 00:24:25.720 21:39:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:25.720 21:39:48 -- host/auth.sh@68 -- # keyid=2 00:24:25.720 21:39:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:25.720 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.720 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.720 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.720 21:39:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.720 21:39:48 -- nvmf/common.sh@717 -- # local ip 00:24:25.720 21:39:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.720 21:39:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.720 21:39:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.720 21:39:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.720 21:39:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.720 21:39:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.720 21:39:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.720 21:39:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.720 21:39:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.720 21:39:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:25.720 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.720 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.720 nvme0n1 00:24:25.720 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.720 21:39:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.720 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.720 21:39:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.720 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.720 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.979 21:39:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.979 21:39:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.979 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.979 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.979 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.979 
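The nvmet_auth_set_key calls traced above (host/auth.sh@42-49) echo three values per iteration: the DHHC-1 secret, the digest name transformed to 'hmac(shaNNN)', and the FFDHE group. A minimal sketch of what such a helper plausibly does, assuming the Linux kernel nvmet configfs layout; the host directory path and the exact attribute names are assumptions from the nvmet auth ABI, not read from this trace:

    # Hypothetical helper: provisions one DH-HMAC-CHAP key on the kernel
    # nvmet target via configfs before the initiator-side connect attempt.
    # keys[keyid] is the harness's array of DHHC-1 secrets seen in the trace.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "${keys[$keyid]}" > "$host_dir/dhchap_key"     # DHHC-1:0N:...: secret
        echo "hmac($digest)"   > "$host_dir/dhchap_hash"    # e.g. hmac(sha512)
        echo "$dhgroup"        > "$host_dir/dhchap_dhgroup" # e.g. ffdhe2048
    }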
21:39:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.979 21:39:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:25.979 21:39:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.979 21:39:48 -- host/auth.sh@44 -- # digest=sha512 00:24:25.979 21:39:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:25.979 21:39:48 -- host/auth.sh@44 -- # keyid=3 00:24:25.979 21:39:48 -- host/auth.sh@45 -- # key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:25.979 21:39:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:25.979 21:39:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:25.979 21:39:48 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:25.979 21:39:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:24:25.979 21:39:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.979 21:39:48 -- host/auth.sh@68 -- # digest=sha512 00:24:25.979 21:39:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:25.979 21:39:48 -- host/auth.sh@68 -- # keyid=3 00:24:25.979 21:39:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:25.979 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.979 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.979 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.979 21:39:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.979 21:39:48 -- nvmf/common.sh@717 -- # local ip 00:24:25.979 21:39:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.979 21:39:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.979 21:39:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.979 21:39:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.979 21:39:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.979 21:39:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.979 21:39:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.979 21:39:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.979 21:39:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.979 21:39:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:25.979 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.979 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.979 nvme0n1 00:24:25.979 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.979 21:39:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.979 21:39:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.979 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.979 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.979 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.979 21:39:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.979 21:39:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.979 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.979 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.979 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.979 21:39:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.979 21:39:48 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:24:25.979 21:39:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.979 21:39:48 -- host/auth.sh@44 -- # digest=sha512 00:24:25.979 21:39:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:25.979 21:39:48 -- host/auth.sh@44 -- # keyid=4 00:24:25.979 21:39:48 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:25.979 21:39:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:25.979 21:39:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:25.979 21:39:48 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:25.979 21:39:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:24:25.979 21:39:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.979 21:39:48 -- host/auth.sh@68 -- # digest=sha512 00:24:25.979 21:39:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:25.979 21:39:48 -- host/auth.sh@68 -- # keyid=4 00:24:25.979 21:39:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:25.979 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.979 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:26.237 21:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.237 21:39:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.237 21:39:48 -- nvmf/common.sh@717 -- # local ip 00:24:26.237 21:39:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.237 21:39:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.237 21:39:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.237 21:39:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.237 21:39:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.237 21:39:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.237 21:39:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.237 21:39:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.237 21:39:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.237 21:39:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.237 21:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.237 21:39:48 -- common/autotest_common.sh@10 -- # set +x 00:24:26.237 nvme0n1 00:24:26.237 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.237 21:39:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.237 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.237 21:39:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.237 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:26.238 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.238 21:39:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.238 21:39:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.238 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.238 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:26.238 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.238 21:39:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.238 21:39:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.238 21:39:49 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe3072 0 00:24:26.238 21:39:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.238 21:39:49 -- host/auth.sh@44 -- # digest=sha512 00:24:26.238 21:39:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:26.238 21:39:49 -- host/auth.sh@44 -- # keyid=0 00:24:26.238 21:39:49 -- host/auth.sh@45 -- # key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:26.238 21:39:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:26.238 21:39:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:26.238 21:39:49 -- host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:26.238 21:39:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:24:26.238 21:39:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.238 21:39:49 -- host/auth.sh@68 -- # digest=sha512 00:24:26.238 21:39:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:26.238 21:39:49 -- host/auth.sh@68 -- # keyid=0 00:24:26.238 21:39:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:26.238 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.238 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:26.238 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.238 21:39:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.238 21:39:49 -- nvmf/common.sh@717 -- # local ip 00:24:26.238 21:39:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.238 21:39:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.238 21:39:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.238 21:39:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.238 21:39:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.238 21:39:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.238 21:39:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.238 21:39:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.238 21:39:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.238 21:39:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:26.238 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.238 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:26.496 nvme0n1 00:24:26.496 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.496 21:39:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.496 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.496 21:39:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.496 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:26.496 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.496 21:39:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.496 21:39:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.496 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.496 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:26.496 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.496 21:39:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.496 21:39:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:26.496 21:39:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.496 21:39:49 -- host/auth.sh@44 -- # 
digest=sha512 00:24:26.496 21:39:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:26.496 21:39:49 -- host/auth.sh@44 -- # keyid=1 00:24:26.496 21:39:49 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:26.496 21:39:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:26.496 21:39:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:26.496 21:39:49 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:26.496 21:39:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:24:26.496 21:39:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.496 21:39:49 -- host/auth.sh@68 -- # digest=sha512 00:24:26.496 21:39:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:26.496 21:39:49 -- host/auth.sh@68 -- # keyid=1 00:24:26.496 21:39:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:26.496 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.496 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:26.496 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.496 21:39:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.496 21:39:49 -- nvmf/common.sh@717 -- # local ip 00:24:26.496 21:39:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.496 21:39:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.496 21:39:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.496 21:39:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.496 21:39:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.496 21:39:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.496 21:39:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.496 21:39:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.496 21:39:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.496 21:39:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:26.496 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.496 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:26.754 nvme0n1 00:24:26.754 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.754 21:39:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.754 21:39:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.754 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.754 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:26.754 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.754 21:39:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.754 21:39:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.754 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.754 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:26.754 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.754 21:39:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.754 21:39:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:26.754 21:39:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.755 21:39:49 -- host/auth.sh@44 -- # digest=sha512 00:24:26.755 21:39:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:26.755 21:39:49 -- host/auth.sh@44 
-- # keyid=2 00:24:26.755 21:39:49 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:26.755 21:39:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:26.755 21:39:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:26.755 21:39:49 -- host/auth.sh@49 -- # echo DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:26.755 21:39:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:24:26.755 21:39:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.755 21:39:49 -- host/auth.sh@68 -- # digest=sha512 00:24:26.755 21:39:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:26.755 21:39:49 -- host/auth.sh@68 -- # keyid=2 00:24:26.755 21:39:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:26.755 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.755 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:26.755 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.755 21:39:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.755 21:39:49 -- nvmf/common.sh@717 -- # local ip 00:24:26.755 21:39:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.755 21:39:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.755 21:39:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.755 21:39:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.755 21:39:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.755 21:39:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.755 21:39:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.755 21:39:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.755 21:39:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.755 21:39:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:26.755 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.755 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:27.013 nvme0n1 00:24:27.013 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.013 21:39:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.013 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.013 21:39:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.013 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:27.013 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.013 21:39:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.013 21:39:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.013 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.013 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:27.013 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.013 21:39:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.013 21:39:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:27.013 21:39:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.013 21:39:49 -- host/auth.sh@44 -- # digest=sha512 00:24:27.013 21:39:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:27.013 21:39:49 -- host/auth.sh@44 -- # keyid=3 00:24:27.013 21:39:49 -- host/auth.sh@45 -- # key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:27.013 21:39:49 
-- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:27.013 21:39:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:27.013 21:39:49 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:27.013 21:39:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:24:27.013 21:39:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.013 21:39:49 -- host/auth.sh@68 -- # digest=sha512 00:24:27.013 21:39:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:27.013 21:39:49 -- host/auth.sh@68 -- # keyid=3 00:24:27.013 21:39:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:27.013 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.013 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:27.013 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.013 21:39:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.013 21:39:49 -- nvmf/common.sh@717 -- # local ip 00:24:27.013 21:39:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.013 21:39:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.013 21:39:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.013 21:39:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.013 21:39:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.013 21:39:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.013 21:39:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.013 21:39:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.013 21:39:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.013 21:39:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:27.013 21:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.013 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:24:27.271 nvme0n1 00:24:27.271 21:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.271 21:39:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.271 21:39:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.271 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.271 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:27.271 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.271 21:39:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.271 21:39:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.271 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.271 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:27.271 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.271 21:39:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.271 21:39:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:27.271 21:39:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.271 21:39:50 -- host/auth.sh@44 -- # digest=sha512 00:24:27.271 21:39:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:27.271 21:39:50 -- host/auth.sh@44 -- # keyid=4 00:24:27.271 21:39:50 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:27.271 21:39:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:27.271 21:39:50 -- host/auth.sh@48 -- # echo 
ffdhe3072 00:24:27.271 21:39:50 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:27.271 21:39:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:24:27.271 21:39:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.271 21:39:50 -- host/auth.sh@68 -- # digest=sha512 00:24:27.271 21:39:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:27.271 21:39:50 -- host/auth.sh@68 -- # keyid=4 00:24:27.272 21:39:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:27.272 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.272 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:27.272 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.272 21:39:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.272 21:39:50 -- nvmf/common.sh@717 -- # local ip 00:24:27.272 21:39:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.272 21:39:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.272 21:39:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.272 21:39:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.272 21:39:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.272 21:39:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.272 21:39:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.272 21:39:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.272 21:39:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.272 21:39:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:27.272 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.272 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:27.538 nvme0n1 00:24:27.538 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.538 21:39:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.538 21:39:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.538 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.538 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:27.538 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.538 21:39:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.538 21:39:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.538 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.538 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:27.538 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.538 21:39:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:27.538 21:39:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.538 21:39:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:27.538 21:39:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.538 21:39:50 -- host/auth.sh@44 -- # digest=sha512 00:24:27.538 21:39:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:27.538 21:39:50 -- host/auth.sh@44 -- # keyid=0 00:24:27.538 21:39:50 -- host/auth.sh@45 -- # key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:27.538 21:39:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:27.538 21:39:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:27.538 21:39:50 -- 
host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:27.538 21:39:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:24:27.538 21:39:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.538 21:39:50 -- host/auth.sh@68 -- # digest=sha512 00:24:27.538 21:39:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:27.538 21:39:50 -- host/auth.sh@68 -- # keyid=0 00:24:27.538 21:39:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:27.538 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.538 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:27.538 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.538 21:39:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.538 21:39:50 -- nvmf/common.sh@717 -- # local ip 00:24:27.538 21:39:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.538 21:39:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.538 21:39:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.538 21:39:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.538 21:39:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.538 21:39:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.538 21:39:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.538 21:39:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.538 21:39:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.538 21:39:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:27.538 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.538 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:27.800 nvme0n1 00:24:27.800 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.800 21:39:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.800 21:39:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.800 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.800 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:27.800 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.800 21:39:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.800 21:39:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.800 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.800 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:27.800 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.800 21:39:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.800 21:39:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:27.800 21:39:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.800 21:39:50 -- host/auth.sh@44 -- # digest=sha512 00:24:27.801 21:39:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:27.801 21:39:50 -- host/auth.sh@44 -- # keyid=1 00:24:27.801 21:39:50 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:27.801 21:39:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:27.801 21:39:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:27.801 21:39:50 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:27.801 21:39:50 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:24:27.801 21:39:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.801 21:39:50 -- host/auth.sh@68 -- # digest=sha512 00:24:27.801 21:39:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:27.801 21:39:50 -- host/auth.sh@68 -- # keyid=1 00:24:27.801 21:39:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:27.801 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.801 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:27.801 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.801 21:39:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.801 21:39:50 -- nvmf/common.sh@717 -- # local ip 00:24:27.801 21:39:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.801 21:39:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.801 21:39:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.801 21:39:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.801 21:39:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.801 21:39:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.801 21:39:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.801 21:39:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.801 21:39:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.801 21:39:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:27.801 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.801 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:28.059 nvme0n1 00:24:28.059 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.059 21:39:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.059 21:39:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:28.059 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.059 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:28.059 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.059 21:39:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.059 21:39:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.059 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.059 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:28.317 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.317 21:39:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:28.317 21:39:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:28.317 21:39:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:28.317 21:39:50 -- host/auth.sh@44 -- # digest=sha512 00:24:28.317 21:39:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:28.317 21:39:50 -- host/auth.sh@44 -- # keyid=2 00:24:28.317 21:39:50 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:28.317 21:39:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:28.317 21:39:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:28.317 21:39:50 -- host/auth.sh@49 -- # echo DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:28.317 21:39:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:24:28.317 21:39:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:28.317 21:39:50 -- 
host/auth.sh@68 -- # digest=sha512 00:24:28.317 21:39:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:28.317 21:39:50 -- host/auth.sh@68 -- # keyid=2 00:24:28.317 21:39:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:28.317 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.317 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:28.317 21:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.317 21:39:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:28.317 21:39:50 -- nvmf/common.sh@717 -- # local ip 00:24:28.317 21:39:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:28.317 21:39:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:28.317 21:39:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.317 21:39:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.317 21:39:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:28.317 21:39:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.317 21:39:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:28.317 21:39:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:28.317 21:39:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:28.317 21:39:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:28.317 21:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.317 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:28.576 nvme0n1 00:24:28.576 21:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.576 21:39:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.576 21:39:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:28.576 21:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.576 21:39:51 -- common/autotest_common.sh@10 -- # set +x 00:24:28.576 21:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.576 21:39:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.576 21:39:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.576 21:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.576 21:39:51 -- common/autotest_common.sh@10 -- # set +x 00:24:28.576 21:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.576 21:39:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:28.576 21:39:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:28.576 21:39:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:28.576 21:39:51 -- host/auth.sh@44 -- # digest=sha512 00:24:28.576 21:39:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:28.576 21:39:51 -- host/auth.sh@44 -- # keyid=3 00:24:28.576 21:39:51 -- host/auth.sh@45 -- # key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:28.576 21:39:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:28.576 21:39:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:28.576 21:39:51 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:28.576 21:39:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:24:28.576 21:39:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:28.576 21:39:51 -- host/auth.sh@68 -- # digest=sha512 00:24:28.576 21:39:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:28.576 21:39:51 
-- host/auth.sh@68 -- # keyid=3 00:24:28.576 21:39:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:28.576 21:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.576 21:39:51 -- common/autotest_common.sh@10 -- # set +x 00:24:28.576 21:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.576 21:39:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:28.576 21:39:51 -- nvmf/common.sh@717 -- # local ip 00:24:28.576 21:39:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:28.576 21:39:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:28.576 21:39:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.576 21:39:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.576 21:39:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:28.576 21:39:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.576 21:39:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:28.576 21:39:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:28.576 21:39:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:28.576 21:39:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:28.576 21:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.576 21:39:51 -- common/autotest_common.sh@10 -- # set +x 00:24:28.835 nvme0n1 00:24:28.835 21:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.835 21:39:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.835 21:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.835 21:39:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:28.835 21:39:51 -- common/autotest_common.sh@10 -- # set +x 00:24:28.835 21:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.835 21:39:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.835 21:39:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.835 21:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.835 21:39:51 -- common/autotest_common.sh@10 -- # set +x 00:24:28.835 21:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.835 21:39:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:28.835 21:39:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:28.835 21:39:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:28.835 21:39:51 -- host/auth.sh@44 -- # digest=sha512 00:24:28.835 21:39:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:28.835 21:39:51 -- host/auth.sh@44 -- # keyid=4 00:24:28.835 21:39:51 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:28.835 21:39:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:28.835 21:39:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:28.835 21:39:51 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:28.835 21:39:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:24:28.835 21:39:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:28.835 21:39:51 -- host/auth.sh@68 -- # digest=sha512 00:24:28.835 21:39:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:28.835 21:39:51 -- host/auth.sh@68 -- # keyid=4 00:24:28.835 21:39:51 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:28.835 21:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.835 21:39:51 -- common/autotest_common.sh@10 -- # set +x 00:24:28.835 21:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.835 21:39:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:28.835 21:39:51 -- nvmf/common.sh@717 -- # local ip 00:24:28.835 21:39:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:28.835 21:39:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:28.835 21:39:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.835 21:39:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.835 21:39:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:28.835 21:39:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.835 21:39:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:28.835 21:39:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:28.835 21:39:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:28.835 21:39:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:28.835 21:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.835 21:39:51 -- common/autotest_common.sh@10 -- # set +x 00:24:29.093 nvme0n1 00:24:29.093 21:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.093 21:39:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.093 21:39:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.093 21:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.093 21:39:51 -- common/autotest_common.sh@10 -- # set +x 00:24:29.093 21:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.093 21:39:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.093 21:39:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.093 21:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.093 21:39:51 -- common/autotest_common.sh@10 -- # set +x 00:24:29.093 21:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.093 21:39:51 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.093 21:39:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.093 21:39:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:29.094 21:39:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.094 21:39:51 -- host/auth.sh@44 -- # digest=sha512 00:24:29.094 21:39:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.094 21:39:51 -- host/auth.sh@44 -- # keyid=0 00:24:29.094 21:39:51 -- host/auth.sh@45 -- # key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:29.094 21:39:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:29.094 21:39:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:29.094 21:39:51 -- host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:29.094 21:39:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:24:29.094 21:39:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.094 21:39:51 -- host/auth.sh@68 -- # digest=sha512 00:24:29.094 21:39:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:29.094 21:39:51 -- host/auth.sh@68 -- # keyid=0 00:24:29.094 21:39:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
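Each iteration traced here runs the same initiator-side RPC sequence: restrict the allowed digest and DH group, attach with the matching key, confirm the controller came up, then detach. Condensed from the rpc_cmd lines above; rpc_cmd in the harness forwards to SPDK's scripts/rpc.py, so calling rpc.py directly, as below, is an assumption about the wrapper:

    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0

A mismatch on either side (digest, group, or key) would make the attach fail, which is why the loop re-provisions the target before every attach.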
00:24:29.094 21:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.094 21:39:51 -- common/autotest_common.sh@10 -- # set +x 00:24:29.094 21:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.094 21:39:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.094 21:39:51 -- nvmf/common.sh@717 -- # local ip 00:24:29.094 21:39:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.094 21:39:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.094 21:39:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.094 21:39:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.094 21:39:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.094 21:39:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.094 21:39:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.094 21:39:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.094 21:39:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.094 21:39:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:29.094 21:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.094 21:39:51 -- common/autotest_common.sh@10 -- # set +x 00:24:29.660 nvme0n1 00:24:29.660 21:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.660 21:39:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.660 21:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.660 21:39:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.660 21:39:52 -- common/autotest_common.sh@10 -- # set +x 00:24:29.660 21:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.660 21:39:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.660 21:39:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.660 21:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.660 21:39:52 -- common/autotest_common.sh@10 -- # set +x 00:24:29.660 21:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.660 21:39:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.660 21:39:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:29.660 21:39:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.660 21:39:52 -- host/auth.sh@44 -- # digest=sha512 00:24:29.660 21:39:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.660 21:39:52 -- host/auth.sh@44 -- # keyid=1 00:24:29.660 21:39:52 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:29.660 21:39:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:29.660 21:39:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:29.660 21:39:52 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:29.660 21:39:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:24:29.660 21:39:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.660 21:39:52 -- host/auth.sh@68 -- # digest=sha512 00:24:29.660 21:39:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:29.660 21:39:52 -- host/auth.sh@68 -- # keyid=1 00:24:29.660 21:39:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:29.660 21:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.660 21:39:52 -- 
common/autotest_common.sh@10 -- # set +x 00:24:29.660 21:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.660 21:39:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.660 21:39:52 -- nvmf/common.sh@717 -- # local ip 00:24:29.660 21:39:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.660 21:39:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.660 21:39:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.660 21:39:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.660 21:39:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.660 21:39:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.660 21:39:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.660 21:39:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.660 21:39:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.660 21:39:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:29.660 21:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.660 21:39:52 -- common/autotest_common.sh@10 -- # set +x 00:24:29.918 nvme0n1 00:24:29.918 21:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.918 21:39:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.918 21:39:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.918 21:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.918 21:39:52 -- common/autotest_common.sh@10 -- # set +x 00:24:29.918 21:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.918 21:39:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.918 21:39:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.918 21:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.918 21:39:52 -- common/autotest_common.sh@10 -- # set +x 00:24:29.918 21:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.918 21:39:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.918 21:39:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:29.918 21:39:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.918 21:39:52 -- host/auth.sh@44 -- # digest=sha512 00:24:29.918 21:39:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.918 21:39:52 -- host/auth.sh@44 -- # keyid=2 00:24:29.918 21:39:52 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:29.918 21:39:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:29.918 21:39:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:29.918 21:39:52 -- host/auth.sh@49 -- # echo DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:29.918 21:39:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:24:29.918 21:39:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.918 21:39:52 -- host/auth.sh@68 -- # digest=sha512 00:24:29.918 21:39:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:29.918 21:39:52 -- host/auth.sh@68 -- # keyid=2 00:24:29.918 21:39:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:29.918 21:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.918 21:39:52 -- common/autotest_common.sh@10 -- # set +x 00:24:29.918 21:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.918 21:39:52 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:24:29.918 21:39:52 -- nvmf/common.sh@717 -- # local ip 00:24:29.918 21:39:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.918 21:39:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.918 21:39:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.918 21:39:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.918 21:39:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.918 21:39:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.918 21:39:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.918 21:39:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.918 21:39:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.918 21:39:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:29.918 21:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.919 21:39:52 -- common/autotest_common.sh@10 -- # set +x 00:24:30.486 nvme0n1 00:24:30.486 21:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.486 21:39:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.486 21:39:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:30.486 21:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.486 21:39:53 -- common/autotest_common.sh@10 -- # set +x 00:24:30.486 21:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.486 21:39:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.486 21:39:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.486 21:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.486 21:39:53 -- common/autotest_common.sh@10 -- # set +x 00:24:30.486 21:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.486 21:39:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:30.486 21:39:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:30.486 21:39:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:30.486 21:39:53 -- host/auth.sh@44 -- # digest=sha512 00:24:30.486 21:39:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:30.486 21:39:53 -- host/auth.sh@44 -- # keyid=3 00:24:30.486 21:39:53 -- host/auth.sh@45 -- # key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:30.486 21:39:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:30.486 21:39:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:30.486 21:39:53 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:30.486 21:39:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:24:30.486 21:39:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:30.486 21:39:53 -- host/auth.sh@68 -- # digest=sha512 00:24:30.486 21:39:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:30.486 21:39:53 -- host/auth.sh@68 -- # keyid=3 00:24:30.486 21:39:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:30.486 21:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.486 21:39:53 -- common/autotest_common.sh@10 -- # set +x 00:24:30.486 21:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.486 21:39:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:30.486 21:39:53 -- nvmf/common.sh@717 -- # local ip 00:24:30.486 21:39:53 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:24:30.486 21:39:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:30.486 21:39:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.486 21:39:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.486 21:39:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:30.486 21:39:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.486 21:39:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:30.486 21:39:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:30.486 21:39:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:30.486 21:39:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:30.486 21:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.486 21:39:53 -- common/autotest_common.sh@10 -- # set +x 00:24:30.744 nvme0n1 00:24:30.744 21:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.744 21:39:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:30.744 21:39:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.744 21:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.744 21:39:53 -- common/autotest_common.sh@10 -- # set +x 00:24:30.744 21:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.744 21:39:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.744 21:39:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.744 21:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.744 21:39:53 -- common/autotest_common.sh@10 -- # set +x 00:24:31.002 21:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.002 21:39:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:31.002 21:39:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:31.002 21:39:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:31.002 21:39:53 -- host/auth.sh@44 -- # digest=sha512 00:24:31.002 21:39:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:31.002 21:39:53 -- host/auth.sh@44 -- # keyid=4 00:24:31.002 21:39:53 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:31.002 21:39:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:31.002 21:39:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:31.002 21:39:53 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:31.002 21:39:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:24:31.002 21:39:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:31.002 21:39:53 -- host/auth.sh@68 -- # digest=sha512 00:24:31.002 21:39:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:31.002 21:39:53 -- host/auth.sh@68 -- # keyid=4 00:24:31.002 21:39:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:31.002 21:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.002 21:39:53 -- common/autotest_common.sh@10 -- # set +x 00:24:31.002 21:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.002 21:39:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:31.002 21:39:53 -- nvmf/common.sh@717 -- # local ip 00:24:31.002 21:39:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:31.002 21:39:53 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:24:31.002 21:39:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.002 21:39:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.002 21:39:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:31.002 21:39:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.002 21:39:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:31.002 21:39:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:31.002 21:39:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:31.002 21:39:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.002 21:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.002 21:39:53 -- common/autotest_common.sh@10 -- # set +x 00:24:31.263 nvme0n1 00:24:31.263 21:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.263 21:39:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.263 21:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.263 21:39:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:31.263 21:39:54 -- common/autotest_common.sh@10 -- # set +x 00:24:31.263 21:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.263 21:39:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.263 21:39:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.263 21:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.263 21:39:54 -- common/autotest_common.sh@10 -- # set +x 00:24:31.263 21:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.263 21:39:54 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.263 21:39:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:31.263 21:39:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:31.263 21:39:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:31.263 21:39:54 -- host/auth.sh@44 -- # digest=sha512 00:24:31.263 21:39:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.263 21:39:54 -- host/auth.sh@44 -- # keyid=0 00:24:31.263 21:39:54 -- host/auth.sh@45 -- # key=DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:31.263 21:39:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:31.263 21:39:54 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:31.263 21:39:54 -- host/auth.sh@49 -- # echo DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ: 00:24:31.263 21:39:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:24:31.263 21:39:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:31.263 21:39:54 -- host/auth.sh@68 -- # digest=sha512 00:24:31.263 21:39:54 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:31.263 21:39:54 -- host/auth.sh@68 -- # keyid=0 00:24:31.263 21:39:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:31.263 21:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.263 21:39:54 -- common/autotest_common.sh@10 -- # set +x 00:24:31.263 21:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.263 21:39:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:31.263 21:39:54 -- nvmf/common.sh@717 -- # local ip 00:24:31.263 21:39:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:31.263 21:39:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:31.263 21:39:54 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.263 21:39:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.263 21:39:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:31.263 21:39:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.263 21:39:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:31.263 21:39:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:31.263 21:39:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:31.263 21:39:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:31.263 21:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.263 21:39:54 -- common/autotest_common.sh@10 -- # set +x 00:24:31.830 nvme0n1 00:24:31.830 21:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.830 21:39:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.830 21:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.830 21:39:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:31.830 21:39:54 -- common/autotest_common.sh@10 -- # set +x 00:24:31.830 21:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.830 21:39:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.830 21:39:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.830 21:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.830 21:39:54 -- common/autotest_common.sh@10 -- # set +x 00:24:31.830 21:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.830 21:39:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:31.830 21:39:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:31.830 21:39:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:31.830 21:39:54 -- host/auth.sh@44 -- # digest=sha512 00:24:31.830 21:39:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.830 21:39:54 -- host/auth.sh@44 -- # keyid=1 00:24:31.830 21:39:54 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:31.830 21:39:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:31.830 21:39:54 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:31.830 21:39:54 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:31.830 21:39:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:24:31.830 21:39:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:31.830 21:39:54 -- host/auth.sh@68 -- # digest=sha512 00:24:31.830 21:39:54 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:31.830 21:39:54 -- host/auth.sh@68 -- # keyid=1 00:24:31.830 21:39:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:31.830 21:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.830 21:39:54 -- common/autotest_common.sh@10 -- # set +x 00:24:31.830 21:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.830 21:39:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:31.830 21:39:54 -- nvmf/common.sh@717 -- # local ip 00:24:31.830 21:39:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:31.830 21:39:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:31.830 21:39:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.831 21:39:54 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.831 21:39:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:31.831 21:39:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.831 21:39:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:31.831 21:39:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:31.831 21:39:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:32.089 21:39:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:32.089 21:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.089 21:39:54 -- common/autotest_common.sh@10 -- # set +x 00:24:32.656 nvme0n1 00:24:32.656 21:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.656 21:39:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.656 21:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.656 21:39:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:32.656 21:39:55 -- common/autotest_common.sh@10 -- # set +x 00:24:32.656 21:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.656 21:39:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.656 21:39:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.656 21:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.656 21:39:55 -- common/autotest_common.sh@10 -- # set +x 00:24:32.656 21:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.656 21:39:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:32.656 21:39:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:32.656 21:39:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:32.656 21:39:55 -- host/auth.sh@44 -- # digest=sha512 00:24:32.656 21:39:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:32.656 21:39:55 -- host/auth.sh@44 -- # keyid=2 00:24:32.656 21:39:55 -- host/auth.sh@45 -- # key=DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:32.656 21:39:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:32.656 21:39:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:32.656 21:39:55 -- host/auth.sh@49 -- # echo DHHC-1:01:YzFiNTUwNTlhZjgzMWFkNjFmMTQyYzI5ZWMzZmYzM2V7lKRX: 00:24:32.656 21:39:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:24:32.656 21:39:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:32.656 21:39:55 -- host/auth.sh@68 -- # digest=sha512 00:24:32.656 21:39:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:32.656 21:39:55 -- host/auth.sh@68 -- # keyid=2 00:24:32.656 21:39:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:32.656 21:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.656 21:39:55 -- common/autotest_common.sh@10 -- # set +x 00:24:32.656 21:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.656 21:39:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:32.656 21:39:55 -- nvmf/common.sh@717 -- # local ip 00:24:32.656 21:39:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:32.656 21:39:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:32.656 21:39:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.656 21:39:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.656 21:39:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:32.656 21:39:55 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:24:32.656 21:39:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:32.656 21:39:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:32.656 21:39:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:32.656 21:39:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:32.656 21:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.656 21:39:55 -- common/autotest_common.sh@10 -- # set +x 00:24:33.223 nvme0n1 00:24:33.223 21:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.223 21:39:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:33.223 21:39:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.223 21:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.223 21:39:55 -- common/autotest_common.sh@10 -- # set +x 00:24:33.223 21:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.223 21:39:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.223 21:39:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.223 21:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.223 21:39:55 -- common/autotest_common.sh@10 -- # set +x 00:24:33.223 21:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.223 21:39:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:33.223 21:39:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:33.223 21:39:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:33.223 21:39:55 -- host/auth.sh@44 -- # digest=sha512 00:24:33.223 21:39:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:33.223 21:39:55 -- host/auth.sh@44 -- # keyid=3 00:24:33.223 21:39:55 -- host/auth.sh@45 -- # key=DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:33.223 21:39:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:33.223 21:39:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:33.224 21:39:55 -- host/auth.sh@49 -- # echo DHHC-1:02:M2FhZmFiYTk1MjE2MTg0OThhYzE0N2E1ZTQwYzdmNzlmZTkxNmMxZmQxNjFhMTI2btogDA==: 00:24:33.224 21:39:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:24:33.224 21:39:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:33.224 21:39:55 -- host/auth.sh@68 -- # digest=sha512 00:24:33.224 21:39:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:33.224 21:39:55 -- host/auth.sh@68 -- # keyid=3 00:24:33.224 21:39:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:33.224 21:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.224 21:39:55 -- common/autotest_common.sh@10 -- # set +x 00:24:33.224 21:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.224 21:39:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:33.224 21:39:55 -- nvmf/common.sh@717 -- # local ip 00:24:33.224 21:39:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:33.224 21:39:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:33.224 21:39:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.224 21:39:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.224 21:39:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:33.224 21:39:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.224 21:39:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:33.224 21:39:55 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:33.224 21:39:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:33.224 21:39:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:33.224 21:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.224 21:39:55 -- common/autotest_common.sh@10 -- # set +x 00:24:33.791 nvme0n1 00:24:33.791 21:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.791 21:39:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.791 21:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.791 21:39:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:33.791 21:39:56 -- common/autotest_common.sh@10 -- # set +x 00:24:33.791 21:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.791 21:39:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.791 21:39:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.791 21:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.791 21:39:56 -- common/autotest_common.sh@10 -- # set +x 00:24:33.791 21:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.791 21:39:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:33.791 21:39:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:33.791 21:39:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:33.791 21:39:56 -- host/auth.sh@44 -- # digest=sha512 00:24:33.791 21:39:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:33.791 21:39:56 -- host/auth.sh@44 -- # keyid=4 00:24:33.791 21:39:56 -- host/auth.sh@45 -- # key=DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:33.791 21:39:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:33.791 21:39:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:33.791 21:39:56 -- host/auth.sh@49 -- # echo DHHC-1:03:YjVlZjc2NTRkODNlZmYxMWMxZjZhOTRmMDQwNzA1M2UzYjY1MWZkODY5YTc3YzYwOTdjYzQ3NmQyY2JiYTA1NGk+G6o=: 00:24:33.791 21:39:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:24:33.791 21:39:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:33.791 21:39:56 -- host/auth.sh@68 -- # digest=sha512 00:24:33.791 21:39:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:33.791 21:39:56 -- host/auth.sh@68 -- # keyid=4 00:24:33.791 21:39:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:33.791 21:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.791 21:39:56 -- common/autotest_common.sh@10 -- # set +x 00:24:33.792 21:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.792 21:39:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:33.792 21:39:56 -- nvmf/common.sh@717 -- # local ip 00:24:33.792 21:39:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:33.792 21:39:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:33.792 21:39:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.792 21:39:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.792 21:39:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:33.792 21:39:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.792 21:39:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:33.792 21:39:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:33.792 21:39:56 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:33.792 21:39:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.792 21:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.792 21:39:56 -- common/autotest_common.sh@10 -- # set +x 00:24:34.357 nvme0n1 00:24:34.357 21:39:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.357 21:39:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.357 21:39:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.357 21:39:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:34.357 21:39:57 -- common/autotest_common.sh@10 -- # set +x 00:24:34.357 21:39:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.357 21:39:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.357 21:39:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.357 21:39:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.357 21:39:57 -- common/autotest_common.sh@10 -- # set +x 00:24:34.357 21:39:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.357 21:39:57 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:34.357 21:39:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:34.357 21:39:57 -- host/auth.sh@44 -- # digest=sha256 00:24:34.357 21:39:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.357 21:39:57 -- host/auth.sh@44 -- # keyid=1 00:24:34.357 21:39:57 -- host/auth.sh@45 -- # key=DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:34.357 21:39:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:34.357 21:39:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:34.357 21:39:57 -- host/auth.sh@49 -- # echo DHHC-1:00:NjhiNzg1ZTQzYjM3MGU0NTk2NmFjMDE1NDBiNmM0YWU4MWMxZWIxZWI0ZGQ2ZjU3SoFGIQ==: 00:24:34.357 21:39:57 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:34.357 21:39:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.357 21:39:57 -- common/autotest_common.sh@10 -- # set +x 00:24:34.357 21:39:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.357 21:39:57 -- host/auth.sh@119 -- # get_main_ns_ip 00:24:34.357 21:39:57 -- nvmf/common.sh@717 -- # local ip 00:24:34.357 21:39:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:34.357 21:39:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:34.357 21:39:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.357 21:39:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.357 21:39:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:34.357 21:39:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.357 21:39:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:34.357 21:39:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:34.357 21:39:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:34.617 21:39:57 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:34.617 21:39:57 -- common/autotest_common.sh@638 -- # local es=0 00:24:34.617 21:39:57 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:34.617 
21:39:57 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:34.617 21:39:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:34.617 21:39:57 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:34.617 21:39:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:34.617 21:39:57 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:34.617 21:39:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.617 21:39:57 -- common/autotest_common.sh@10 -- # set +x 00:24:34.617 request: 00:24:34.617 { 00:24:34.617 "name": "nvme0", 00:24:34.617 "trtype": "tcp", 00:24:34.617 "traddr": "10.0.0.1", 00:24:34.617 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:34.617 "adrfam": "ipv4", 00:24:34.617 "trsvcid": "4420", 00:24:34.617 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:34.617 "method": "bdev_nvme_attach_controller", 00:24:34.617 "req_id": 1 00:24:34.617 } 00:24:34.617 Got JSON-RPC error response 00:24:34.617 response: 00:24:34.617 { 00:24:34.617 "code": -32602, 00:24:34.617 "message": "Invalid parameters" 00:24:34.617 } 00:24:34.617 21:39:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:34.617 21:39:57 -- common/autotest_common.sh@641 -- # es=1 00:24:34.617 21:39:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:34.617 21:39:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:34.617 21:39:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:34.617 21:39:57 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.617 21:39:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.617 21:39:57 -- common/autotest_common.sh@10 -- # set +x 00:24:34.617 21:39:57 -- host/auth.sh@121 -- # jq length 00:24:34.617 21:39:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.617 21:39:57 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:24:34.617 21:39:57 -- host/auth.sh@124 -- # get_main_ns_ip 00:24:34.617 21:39:57 -- nvmf/common.sh@717 -- # local ip 00:24:34.617 21:39:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:34.617 21:39:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:34.617 21:39:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.617 21:39:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.617 21:39:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:34.617 21:39:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.617 21:39:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:34.617 21:39:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:34.617 21:39:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:34.617 21:39:57 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:34.617 21:39:57 -- common/autotest_common.sh@638 -- # local es=0 00:24:34.617 21:39:57 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:34.617 21:39:57 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:34.617 21:39:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:34.617 21:39:57 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:34.617 21:39:57 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:34.617 21:39:57 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:34.617 21:39:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.617 21:39:57 -- common/autotest_common.sh@10 -- # set +x 00:24:34.617 request: 00:24:34.617 { 00:24:34.617 "name": "nvme0", 00:24:34.617 "trtype": "tcp", 00:24:34.617 "traddr": "10.0.0.1", 00:24:34.617 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:34.617 "adrfam": "ipv4", 00:24:34.617 "trsvcid": "4420", 00:24:34.617 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:34.617 "dhchap_key": "key2", 00:24:34.617 "method": "bdev_nvme_attach_controller", 00:24:34.617 "req_id": 1 00:24:34.617 } 00:24:34.617 Got JSON-RPC error response 00:24:34.617 response: 00:24:34.617 { 00:24:34.617 "code": -32602, 00:24:34.617 "message": "Invalid parameters" 00:24:34.617 } 00:24:34.617 21:39:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:34.617 21:39:57 -- common/autotest_common.sh@641 -- # es=1 00:24:34.617 21:39:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:34.617 21:39:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:34.617 21:39:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:34.617 21:39:57 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.617 21:39:57 -- host/auth.sh@127 -- # jq length 00:24:34.617 21:39:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.617 21:39:57 -- common/autotest_common.sh@10 -- # set +x 00:24:34.617 21:39:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.617 21:39:57 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:24:34.617 21:39:57 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:34.617 21:39:57 -- host/auth.sh@130 -- # cleanup 00:24:34.617 21:39:57 -- host/auth.sh@24 -- # nvmftestfini 00:24:34.617 21:39:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:34.617 21:39:57 -- nvmf/common.sh@117 -- # sync 00:24:34.617 21:39:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:34.617 21:39:57 -- nvmf/common.sh@120 -- # set +e 00:24:34.617 21:39:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:34.617 21:39:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:34.617 rmmod nvme_tcp 00:24:34.876 rmmod nvme_fabrics 00:24:34.876 21:39:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:34.876 21:39:57 -- nvmf/common.sh@124 -- # set -e 00:24:34.876 21:39:57 -- nvmf/common.sh@125 -- # return 0 00:24:34.876 21:39:57 -- nvmf/common.sh@478 -- # '[' -n 2973107 ']' 00:24:34.876 21:39:57 -- nvmf/common.sh@479 -- # killprocess 2973107 00:24:34.876 21:39:57 -- common/autotest_common.sh@936 -- # '[' -z 2973107 ']' 00:24:34.877 21:39:57 -- common/autotest_common.sh@940 -- # kill -0 2973107 00:24:34.877 21:39:57 -- common/autotest_common.sh@941 -- # uname 00:24:34.877 21:39:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:34.877 21:39:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2973107 00:24:34.877 21:39:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:34.877 21:39:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:34.877 21:39:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2973107' 00:24:34.877 killing process with pid 2973107 00:24:34.877 21:39:57 -- common/autotest_common.sh@955 -- # kill 2973107 00:24:34.877 21:39:57 -- 
common/autotest_common.sh@960 -- # wait 2973107 00:24:35.136 21:39:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:35.136 21:39:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:35.136 21:39:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:35.136 21:39:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:35.136 21:39:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:35.136 21:39:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.136 21:39:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.136 21:39:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.039 21:39:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:37.039 21:39:59 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:37.039 21:39:59 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:37.039 21:39:59 -- host/auth.sh@27 -- # clean_kernel_target 00:24:37.039 21:39:59 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:37.039 21:39:59 -- nvmf/common.sh@675 -- # echo 0 00:24:37.039 21:39:59 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:37.039 21:39:59 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:37.039 21:39:59 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:37.039 21:39:59 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:37.039 21:39:59 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:37.039 21:39:59 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:37.299 21:39:59 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:40.589 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:40.589 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:41.968 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:24:42.227 21:40:04 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.dPN /tmp/spdk.key-null.E2H /tmp/spdk.key-sha256.Gex /tmp/spdk.key-sha384.U9L /tmp/spdk.key-sha512.vkC /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:42.227 21:40:04 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:45.516 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:00:04.5 (8086 2021): Already using the 
vfio-pci driver 00:24:45.516 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:24:45.516 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:45.516 00:24:45.516 real 0m52.439s 00:24:45.516 user 0m44.563s 00:24:45.516 sys 0m14.810s 00:24:45.516 21:40:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:45.516 21:40:07 -- common/autotest_common.sh@10 -- # set +x 00:24:45.516 ************************************ 00:24:45.516 END TEST nvmf_auth 00:24:45.516 ************************************ 00:24:45.516 21:40:07 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:24:45.516 21:40:07 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:45.516 21:40:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:45.516 21:40:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:45.516 21:40:07 -- common/autotest_common.sh@10 -- # set +x 00:24:45.516 ************************************ 00:24:45.516 START TEST nvmf_digest 00:24:45.516 ************************************ 00:24:45.516 21:40:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:45.516 * Looking for test storage... 
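
Every nvmet_auth_set_key block traced in the auth run above reduces to three writes into the kernel target's configfs tree, one per parameter (digest, DH group, key); the matching initiator side is the bdev_nvme_set_options / bdev_nvme_attach_controller RPC pair, and the NOT-wrapped attach attempts above are the negative checks that a missing or wrong key is rejected with -32602 Invalid parameters. A minimal sketch of the target-side helper, assuming the Linux nvmet dhchap_* configfs attributes (the redirection targets never appear in xtrace, so the exact paths here are inferred, not copied from this run):

    #!/usr/bin/env bash
    # Sketch only: the dhchap_hash/dhchap_dhgroup/dhchap_key attribute names
    # are assumed from the Linux nvmet auth configfs layout; the host NQN and
    # the DHHC-1 key below are the ones exercised by this test run.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 key=$3
        echo "hmac(${digest})" > "${host_dir}/dhchap_hash"    # e.g. hmac(sha512)
        echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup" # e.g. ffdhe8192
        echo "${key}"          > "${host_dir}/dhchap_key"     # DHHC-1:<id>:<base64 secret>:
    }

    nvmet_auth_set_key sha512 ffdhe8192 \
        'DHHC-1:00:NTZhMTljODIyMmE2NTA5OWU2Nzc5MDQ3NDE3N2ZhMGFIYZvZ:'
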
00:24:45.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.516 21:40:08 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.516 21:40:08 -- nvmf/common.sh@7 -- # uname -s 00:24:45.516 21:40:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.516 21:40:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.516 21:40:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.516 21:40:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.516 21:40:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.516 21:40:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.516 21:40:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.516 21:40:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.516 21:40:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.516 21:40:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.516 21:40:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:45.516 21:40:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:45.516 21:40:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.516 21:40:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.516 21:40:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.516 21:40:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.516 21:40:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.516 21:40:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.516 21:40:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.516 21:40:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.517 21:40:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.517 21:40:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.517 21:40:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.517 21:40:08 -- paths/export.sh@5 -- # export PATH 00:24:45.517 21:40:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.517 21:40:08 -- nvmf/common.sh@47 -- # : 0 00:24:45.517 21:40:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:45.517 21:40:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:45.517 21:40:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.517 21:40:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.517 21:40:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.517 21:40:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:45.517 21:40:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:45.517 21:40:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:45.517 21:40:08 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:45.517 21:40:08 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:45.517 21:40:08 -- host/digest.sh@16 -- # runtime=2 00:24:45.517 21:40:08 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:45.517 21:40:08 -- host/digest.sh@138 -- # nvmftestinit 00:24:45.517 21:40:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:45.517 21:40:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.517 21:40:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:45.517 21:40:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:45.517 21:40:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:45.517 21:40:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.517 21:40:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.517 21:40:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.517 21:40:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:45.517 21:40:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:45.517 21:40:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:45.517 21:40:08 -- common/autotest_common.sh@10 -- # set +x 00:24:52.114 21:40:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:52.114 21:40:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:52.114 21:40:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:52.114 21:40:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:52.114 21:40:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:52.114 21:40:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:52.114 21:40:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:52.114 21:40:14 -- 
nvmf/common.sh@295 -- # net_devs=() 00:24:52.114 21:40:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:52.114 21:40:14 -- nvmf/common.sh@296 -- # e810=() 00:24:52.114 21:40:14 -- nvmf/common.sh@296 -- # local -ga e810 00:24:52.114 21:40:14 -- nvmf/common.sh@297 -- # x722=() 00:24:52.114 21:40:14 -- nvmf/common.sh@297 -- # local -ga x722 00:24:52.114 21:40:14 -- nvmf/common.sh@298 -- # mlx=() 00:24:52.114 21:40:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:52.114 21:40:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.114 21:40:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.114 21:40:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.114 21:40:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.114 21:40:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.114 21:40:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.114 21:40:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.114 21:40:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.114 21:40:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.114 21:40:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.114 21:40:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.114 21:40:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:52.114 21:40:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:52.114 21:40:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:52.114 21:40:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:52.114 21:40:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:52.114 21:40:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:52.114 21:40:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.114 21:40:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:52.114 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:52.114 21:40:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.114 21:40:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.115 21:40:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:52.115 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:52.115 21:40:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:52.115 21:40:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.115 21:40:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.115 21:40:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:52.115 21:40:14 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.115 21:40:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:52.115 Found net devices under 0000:af:00.0: cvl_0_0 00:24:52.115 21:40:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.115 21:40:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.115 21:40:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.115 21:40:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:52.115 21:40:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.115 21:40:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:52.115 Found net devices under 0000:af:00.1: cvl_0_1 00:24:52.115 21:40:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.115 21:40:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:52.115 21:40:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:52.115 21:40:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:52.115 21:40:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.115 21:40:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.115 21:40:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.115 21:40:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:52.115 21:40:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.115 21:40:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.115 21:40:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:52.115 21:40:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.115 21:40:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.115 21:40:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:52.115 21:40:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:52.115 21:40:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.115 21:40:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.115 21:40:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.115 21:40:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.115 21:40:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:52.115 21:40:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.115 21:40:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.115 21:40:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.115 21:40:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:52.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:24:52.115 00:24:52.115 --- 10.0.0.2 ping statistics --- 00:24:52.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.115 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:24:52.115 21:40:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:52.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:24:52.115 00:24:52.115 --- 10.0.0.1 ping statistics --- 00:24:52.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.115 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:24:52.115 21:40:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.115 21:40:14 -- nvmf/common.sh@411 -- # return 0 00:24:52.115 21:40:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:52.115 21:40:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.115 21:40:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:52.115 21:40:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.115 21:40:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:52.115 21:40:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:52.115 21:40:14 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:52.115 21:40:14 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:52.115 21:40:14 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:52.115 21:40:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:52.115 21:40:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:52.115 21:40:14 -- common/autotest_common.sh@10 -- # set +x 00:24:52.374 ************************************ 00:24:52.374 START TEST nvmf_digest_clean 00:24:52.374 ************************************ 00:24:52.374 21:40:15 -- common/autotest_common.sh@1111 -- # run_digest 00:24:52.374 21:40:15 -- host/digest.sh@120 -- # local dsa_initiator 00:24:52.374 21:40:15 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:52.374 21:40:15 -- host/digest.sh@121 -- # dsa_initiator=false 00:24:52.374 21:40:15 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:52.374 21:40:15 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:52.374 21:40:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:52.374 21:40:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:52.374 21:40:15 -- common/autotest_common.sh@10 -- # set +x 00:24:52.374 21:40:15 -- nvmf/common.sh@470 -- # nvmfpid=2987008 00:24:52.374 21:40:15 -- nvmf/common.sh@471 -- # waitforlisten 2987008 00:24:52.374 21:40:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:52.374 21:40:15 -- common/autotest_common.sh@817 -- # '[' -z 2987008 ']' 00:24:52.374 21:40:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.374 21:40:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:52.374 21:40:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.374 21:40:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:52.374 21:40:15 -- common/autotest_common.sh@10 -- # set +x 00:24:52.374 [2024-04-24 21:40:15.112149] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
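
Strung together, the nvmf_tcp_init sequence traced just above amounts to the following commands, copied from the trace itself (the cvl_0_0/cvl_0_1 interface names belong to this host's E810 ports, so substitute your own netdevs elsewhere):

    # Target NIC goes into a private namespace; the initiator NIC stays in the
    # root namespace; 4420 is the NVMe/TCP port nvmf_tgt listens on later.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
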
00:24:52.374 [2024-04-24 21:40:15.112192] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.374 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.374 [2024-04-24 21:40:15.185890] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.374 [2024-04-24 21:40:15.256808] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.374 [2024-04-24 21:40:15.256850] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.374 [2024-04-24 21:40:15.256861] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.374 [2024-04-24 21:40:15.256870] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.374 [2024-04-24 21:40:15.256877] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.374 [2024-04-24 21:40:15.256909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.309 21:40:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:53.309 21:40:15 -- common/autotest_common.sh@850 -- # return 0 00:24:53.309 21:40:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:53.309 21:40:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:53.309 21:40:15 -- common/autotest_common.sh@10 -- # set +x 00:24:53.309 21:40:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.309 21:40:15 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:53.309 21:40:15 -- host/digest.sh@126 -- # common_target_config 00:24:53.309 21:40:15 -- host/digest.sh@43 -- # rpc_cmd 00:24:53.309 21:40:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.309 21:40:15 -- common/autotest_common.sh@10 -- # set +x 00:24:53.309 null0 00:24:53.309 [2024-04-24 21:40:16.024527] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.309 [2024-04-24 21:40:16.048743] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.309 21:40:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.309 21:40:16 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:53.309 21:40:16 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:53.309 21:40:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:53.309 21:40:16 -- host/digest.sh@80 -- # rw=randread 00:24:53.309 21:40:16 -- host/digest.sh@80 -- # bs=4096 00:24:53.309 21:40:16 -- host/digest.sh@80 -- # qd=128 00:24:53.309 21:40:16 -- host/digest.sh@80 -- # scan_dsa=false 00:24:53.309 21:40:16 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:53.309 21:40:16 -- host/digest.sh@83 -- # bperfpid=2987079 00:24:53.309 21:40:16 -- host/digest.sh@84 -- # waitforlisten 2987079 /var/tmp/bperf.sock 00:24:53.309 21:40:16 -- common/autotest_common.sh@817 -- # '[' -z 2987079 ']' 00:24:53.309 21:40:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:53.309 21:40:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:53.309 21:40:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:53.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:53.309 21:40:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:53.309 21:40:16 -- common/autotest_common.sh@10 -- # set +x 00:24:53.309 [2024-04-24 21:40:16.083722] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:24:53.309 [2024-04-24 21:40:16.083767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987079 ] 00:24:53.309 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.309 [2024-04-24 21:40:16.154725] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.567 [2024-04-24 21:40:16.229438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.133 21:40:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:54.133 21:40:16 -- common/autotest_common.sh@850 -- # return 0 00:24:54.133 21:40:16 -- host/digest.sh@86 -- # false 00:24:54.133 21:40:16 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:54.133 21:40:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:54.391 21:40:17 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:54.391 21:40:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:54.649 nvme0n1 00:24:54.907 21:40:17 -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:54.907 21:40:17 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:54.907 Running I/O for 2 seconds... 
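(Condensed for reference: the run traced above reduces to the shell sketch below. Every binary, flag, socket and address is taken verbatim from the trace; only the RPC variable and the backgrounding & are added here for brevity.)

  # start bdevperf paused (-z --wait-for-rpc) on its own RPC socket; core mask 0x2 = core 1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  $RPC framework_start_init      # finish subsystem init once accel options are in place
  # --ddgst enables the NVMe/TCP data digest (crc32c over the payload) on this controller
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests    # drives the 2-second I/O run reported next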
00:24:56.809
00:24:56.809 Latency(us)
00:24:56.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:56.809 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:56.809 nvme0n1 : 2.00 27326.06 106.74 0.00 0.00 4679.03 2110.26 22020.10
00:24:56.809 ===================================================================================================================
00:24:56.809 Total : 27326.06 106.74 0.00 0.00 4679.03 2110.26 22020.10
00:24:56.809 0
00:24:56.809 21:40:19 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:24:56.809 21:40:19 -- host/digest.sh@93 -- # get_accel_stats
00:24:56.809 21:40:19 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:24:56.809 21:40:19 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:24:56.809 | select(.opcode=="crc32c")
00:24:56.809 | "\(.module_name) \(.executed)"'
00:24:56.809 21:40:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:24:57.068 21:40:19 -- host/digest.sh@94 -- # false
00:24:57.068 21:40:19 -- host/digest.sh@94 -- # exp_module=software
00:24:57.068 21:40:19 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:24:57.068 21:40:19 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:24:57.068 21:40:19 -- host/digest.sh@98 -- # killprocess 2987079
00:24:57.068 21:40:19 -- common/autotest_common.sh@936 -- # '[' -z 2987079 ']'
00:24:57.068 21:40:19 -- common/autotest_common.sh@940 -- # kill -0 2987079
00:24:57.068 21:40:19 -- common/autotest_common.sh@941 -- # uname
00:24:57.068 21:40:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:57.068 21:40:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2987079
00:24:57.068 21:40:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:24:57.068 21:40:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:24:57.068 21:40:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2987079'
00:24:57.068 killing process with pid 2987079
00:24:57.068 21:40:19 -- common/autotest_common.sh@955 -- # kill 2987079
00:24:57.068 Received shutdown signal, test time was about 2.000000 seconds
00:24:57.068
00:24:57.068 Latency(us)
00:24:57.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:57.068 ===================================================================================================================
00:24:57.068 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:57.068 21:40:19 -- common/autotest_common.sh@960 -- # wait 2987079
00:24:57.327 21:40:20 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:24:57.327 21:40:20 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:24:57.327 21:40:20 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:24:57.327 21:40:20 -- host/digest.sh@80 -- # rw=randread
00:24:57.327 21:40:20 -- host/digest.sh@80 -- # bs=131072
00:24:57.327 21:40:20 -- host/digest.sh@80 -- # qd=16
00:24:57.327 21:40:20 -- host/digest.sh@80 -- # scan_dsa=false
00:24:57.327 21:40:20 -- host/digest.sh@83 -- # bperfpid=2987839
00:24:57.327 21:40:20 -- host/digest.sh@84 -- # waitforlisten 2987839 /var/tmp/bperf.sock
00:24:57.327 21:40:20 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:24:57.327 21:40:20 -- common/autotest_common.sh@817 -- # '[' -z 2987839 ']'
00:24:57.327 21:40:20
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:57.327 21:40:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:57.327 21:40:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:57.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:57.327 21:40:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:57.327 21:40:20 -- common/autotest_common.sh@10 -- # set +x 00:24:57.327 [2024-04-24 21:40:20.107321] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:24:57.327 [2024-04-24 21:40:20.107374] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987839 ] 00:24:57.327 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:57.327 Zero copy mechanism will not be used. 00:24:57.328 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.328 [2024-04-24 21:40:20.178111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.586 [2024-04-24 21:40:20.253075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.154 21:40:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:58.154 21:40:20 -- common/autotest_common.sh@850 -- # return 0 00:24:58.154 21:40:20 -- host/digest.sh@86 -- # false 00:24:58.154 21:40:20 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:58.154 21:40:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:58.412 21:40:21 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:58.412 21:40:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:58.671 nvme0n1 00:24:58.671 21:40:21 -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:58.671 21:40:21 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:58.671 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:58.671 Zero copy mechanism will not be used. 00:24:58.671 Running I/O for 2 seconds... 
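(Each run is then graded not on throughput but on the accel crc32c counters, via the accel_get_stats RPC and jq filter visible in the trace; a minimal sketch of that check, with the rpc.py path and filter taken verbatim:)

  # print "<module_name> <executed>" for the crc32c opcode only
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # the harness asserts executed > 0 and, since dsa_initiator=false here, module_name == software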
00:25:00.577
00:25:00.577 Latency(us)
00:25:00.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:00.577 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:00.577 nvme0n1 : 2.00 2588.32 323.54 0.00 0.00 6180.03 5400.17 29779.56
00:25:00.577 ===================================================================================================================
00:25:00.577 Total : 2588.32 323.54 0.00 0.00 6180.03 5400.17 29779.56
00:25:00.577 0
00:25:00.837 21:40:23 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:00.837 21:40:23 -- host/digest.sh@93 -- # get_accel_stats
00:25:00.837 21:40:23 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:00.837 21:40:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:00.837 21:40:23 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:00.837 | select(.opcode=="crc32c")
00:25:00.837 | "\(.module_name) \(.executed)"'
00:25:00.837 21:40:23 -- host/digest.sh@94 -- # false
00:25:00.837 21:40:23 -- host/digest.sh@94 -- # exp_module=software
00:25:00.837 21:40:23 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:00.837 21:40:23 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:00.837 21:40:23 -- host/digest.sh@98 -- # killprocess 2987839
00:25:00.837 21:40:23 -- common/autotest_common.sh@936 -- # '[' -z 2987839 ']'
00:25:00.837 21:40:23 -- common/autotest_common.sh@940 -- # kill -0 2987839
00:25:00.837 21:40:23 -- common/autotest_common.sh@941 -- # uname
00:25:00.837 21:40:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:00.837 21:40:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2987839
00:25:00.837 21:40:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:25:00.837 21:40:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:25:00.837 21:40:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2987839'
00:25:00.837 killing process with pid 2987839
00:25:00.837 21:40:23 -- common/autotest_common.sh@955 -- # kill 2987839
00:25:00.837 Received shutdown signal, test time was about 2.000000 seconds
00:25:00.837
00:25:00.837 Latency(us)
00:25:00.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:00.837 ===================================================================================================================
00:25:00.837 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:00.837 21:40:23 -- common/autotest_common.sh@960 -- # wait 2987839
00:25:01.100 21:40:23 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:25:01.100 21:40:23 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:25:01.100 21:40:23 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:25:01.100 21:40:23 -- host/digest.sh@80 -- # rw=randwrite
00:25:01.100 21:40:23 -- host/digest.sh@80 -- # bs=4096
00:25:01.100 21:40:23 -- host/digest.sh@80 -- # qd=128
00:25:01.100 21:40:23 -- host/digest.sh@80 -- # scan_dsa=false
00:25:01.100 21:40:23 -- host/digest.sh@83 -- # bperfpid=2988388
00:25:01.100 21:40:23 -- host/digest.sh@84 -- # waitforlisten 2988388 /var/tmp/bperf.sock
00:25:01.100 21:40:23 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:25:01.100 21:40:23 -- common/autotest_common.sh@817 -- # '[' -z 2988388 ']'
00:25:01.100 21:40:23
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:01.100 21:40:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:01.100 21:40:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:01.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:01.100 21:40:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:01.100 21:40:23 -- common/autotest_common.sh@10 -- # set +x 00:25:01.100 [2024-04-24 21:40:23.961076] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:25:01.100 [2024-04-24 21:40:23.961129] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2988388 ] 00:25:01.373 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.373 [2024-04-24 21:40:24.033274] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.373 [2024-04-24 21:40:24.103876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.940 21:40:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:01.940 21:40:24 -- common/autotest_common.sh@850 -- # return 0 00:25:01.940 21:40:24 -- host/digest.sh@86 -- # false 00:25:01.940 21:40:24 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:01.940 21:40:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:02.198 21:40:24 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:02.198 21:40:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:02.456 nvme0n1 00:25:02.456 21:40:25 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:02.456 21:40:25 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:02.714 Running I/O for 2 seconds... 
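(A quick sanity check on the result tables: the MiB/s column is just IOPS x IO size / 2^20; e.g. for the two randread rows already reported above:)

  echo '27326.06 * 4096 / 1048576' | bc -l    # = 106.74 MiB/s for the 4 KiB, qd 128 run
  echo '2588.32 * 131072 / 1048576' | bc -l   # = 323.54 MiB/s for the 128 KiB, qd 16 run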
00:25:04.614
00:25:04.614 Latency(us)
00:25:04.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:04.614 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:04.614 nvme0n1 : 2.00 28321.99 110.63 0.00 0.00 4514.03 3250.59 26004.68
00:25:04.614 ===================================================================================================================
00:25:04.614 Total : 28321.99 110.63 0.00 0.00 4514.03 3250.59 26004.68
00:25:04.614 0
00:25:04.614 21:40:27 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:04.614 21:40:27 -- host/digest.sh@93 -- # get_accel_stats
00:25:04.614 21:40:27 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:04.614 21:40:27 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:04.614 | select(.opcode=="crc32c")
00:25:04.614 | "\(.module_name) \(.executed)"'
00:25:04.614 21:40:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:04.873 21:40:27 -- host/digest.sh@94 -- # false
00:25:04.873 21:40:27 -- host/digest.sh@94 -- # exp_module=software
00:25:04.873 21:40:27 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:04.873 21:40:27 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:04.873 21:40:27 -- host/digest.sh@98 -- # killprocess 2988388
00:25:04.873 21:40:27 -- common/autotest_common.sh@936 -- # '[' -z 2988388 ']'
00:25:04.873 21:40:27 -- common/autotest_common.sh@940 -- # kill -0 2988388
00:25:04.873 21:40:27 -- common/autotest_common.sh@941 -- # uname
00:25:04.873 21:40:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:04.873 21:40:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2988388
00:25:04.873 21:40:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:25:04.873 21:40:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:25:04.873 21:40:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2988388'
00:25:04.873 killing process with pid 2988388
00:25:04.873 21:40:27 -- common/autotest_common.sh@955 -- # kill 2988388
00:25:04.873 Received shutdown signal, test time was about 2.000000 seconds
00:25:04.873
00:25:04.873 Latency(us)
00:25:04.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:04.873 ===================================================================================================================
00:25:04.873 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:04.873 21:40:27 -- common/autotest_common.sh@960 -- # wait 2988388
00:25:05.131 21:40:27 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:25:05.131 21:40:27 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:25:05.131 21:40:27 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:25:05.132 21:40:27 -- host/digest.sh@80 -- # rw=randwrite
00:25:05.132 21:40:27 -- host/digest.sh@80 -- # bs=131072
00:25:05.132 21:40:27 -- host/digest.sh@80 -- # qd=16
00:25:05.132 21:40:27 -- host/digest.sh@80 -- # scan_dsa=false
00:25:05.132 21:40:27 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:25:05.132 21:40:27 -- host/digest.sh@83 -- # bperfpid=2989192
00:25:05.132 21:40:27 -- host/digest.sh@84 -- # waitforlisten 2989192 /var/tmp/bperf.sock
00:25:05.132 21:40:27 -- common/autotest_common.sh@817 -- # '[' -z 2989192 ']'
00:25:05.132
21:40:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:05.132 21:40:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:05.132 21:40:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:05.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:05.132 21:40:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:05.132 21:40:27 -- common/autotest_common.sh@10 -- # set +x 00:25:05.132 [2024-04-24 21:40:27.883820] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:25:05.132 [2024-04-24 21:40:27.883874] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2989192 ] 00:25:05.132 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:05.132 Zero copy mechanism will not be used. 00:25:05.132 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.132 [2024-04-24 21:40:27.952231] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.132 [2024-04-24 21:40:28.018863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.066 21:40:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:06.066 21:40:28 -- common/autotest_common.sh@850 -- # return 0 00:25:06.066 21:40:28 -- host/digest.sh@86 -- # false 00:25:06.066 21:40:28 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:06.066 21:40:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:06.066 21:40:28 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.066 21:40:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.323 nvme0n1 00:25:06.323 21:40:29 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:06.323 21:40:29 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:06.581 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:06.581 Zero copy mechanism will not be used. 00:25:06.581 Running I/O for 2 seconds... 
00:25:08.479
00:25:08.479 Latency(us)
00:25:08.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:08.479 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:25:08.479 nvme0n1 : 2.01 1748.80 218.60 0.00 0.00 9128.53 6868.17 35232.15
00:25:08.479 ===================================================================================================================
00:25:08.479 Total : 1748.80 218.60 0.00 0.00 9128.53 6868.17 35232.15
00:25:08.479 0
00:25:08.479 21:40:31 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:08.479 21:40:31 -- host/digest.sh@93 -- # get_accel_stats
00:25:08.479 21:40:31 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:08.479 21:40:31 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:08.479 | select(.opcode=="crc32c")
00:25:08.479 | "\(.module_name) \(.executed)"'
00:25:08.479 21:40:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:08.735 21:40:31 -- host/digest.sh@94 -- # false
00:25:08.735 21:40:31 -- host/digest.sh@94 -- # exp_module=software
00:25:08.735 21:40:31 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:08.735 21:40:31 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:08.735 21:40:31 -- host/digest.sh@98 -- # killprocess 2989192
00:25:08.735 21:40:31 -- common/autotest_common.sh@936 -- # '[' -z 2989192 ']'
00:25:08.735 21:40:31 -- common/autotest_common.sh@940 -- # kill -0 2989192
00:25:08.735 21:40:31 -- common/autotest_common.sh@941 -- # uname
00:25:08.735 21:40:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:08.735 21:40:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2989192
00:25:08.735 21:40:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:25:08.735 21:40:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:25:08.735 21:40:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2989192'
00:25:08.735 killing process with pid 2989192
00:25:08.735 21:40:31 -- common/autotest_common.sh@955 -- # kill 2989192
00:25:08.735 Received shutdown signal, test time was about 2.000000 seconds
00:25:08.735
00:25:08.735 Latency(us)
00:25:08.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:08.735 ===================================================================================================================
00:25:08.735 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:08.735 21:40:31 -- common/autotest_common.sh@960 -- # wait 2989192
00:25:08.991 21:40:31 -- host/digest.sh@132 -- # killprocess 2987008
00:25:08.991 21:40:31 -- common/autotest_common.sh@936 -- # '[' -z 2987008 ']'
00:25:08.991 21:40:31 -- common/autotest_common.sh@940 -- # kill -0 2987008
00:25:08.991 21:40:31 -- common/autotest_common.sh@941 -- # uname
00:25:08.991 21:40:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:08.991 21:40:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2987008
00:25:08.992 21:40:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:25:08.992 21:40:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:25:08.992 21:40:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2987008'
00:25:08.992 killing process with pid 2987008
00:25:08.992 21:40:31 -- common/autotest_common.sh@955 -- # kill 2987008
00:25:08.992 21:40:31 -- common/autotest_common.sh@960 -- # wait 2987008
00:25:09.248
00:25:09.248 real 0m16.884s
00:25:09.248 user 0m32.460s
00:25:09.248 sys 0m4.248s
00:25:09.248 21:40:31 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:25:09.248 21:40:31 -- common/autotest_common.sh@10 -- # set +x
00:25:09.248 ************************************
00:25:09.248 END TEST nvmf_digest_clean
00:25:09.248 ************************************
00:25:09.248 21:40:31 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:25:09.248 21:40:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:25:09.248 21:40:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:25:09.248 21:40:31 -- common/autotest_common.sh@10 -- # set +x
00:25:09.248 ************************************
00:25:09.248 START TEST nvmf_digest_error
00:25:09.248 ************************************
00:25:09.248 21:40:32 -- common/autotest_common.sh@1111 -- # run_digest_error
00:25:09.248 21:40:32 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:25:09.248 21:40:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:25:09.248 21:40:32 -- common/autotest_common.sh@710 -- # xtrace_disable
00:25:09.248 21:40:32 -- common/autotest_common.sh@10 -- # set +x
00:25:09.248 21:40:32 -- nvmf/common.sh@470 -- # nvmfpid=2989825
00:25:09.248 21:40:32 -- nvmf/common.sh@471 -- # waitforlisten 2989825
00:25:09.248 21:40:32 -- common/autotest_common.sh@817 -- # '[' -z 2989825 ']'
00:25:09.248 21:40:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:09.248 21:40:32 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:09.248 21:40:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:09.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:09.248 21:40:32 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:09.248 21:40:32 -- common/autotest_common.sh@10 -- # set +x
00:25:09.248 21:40:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:25:09.505 [2024-04-24 21:40:32.181155] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:25:09.505 [2024-04-24 21:40:32.181199] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:09.505 EAL: No free 2048 kB hugepages reported on node 1
00:25:09.505 [2024-04-24 21:40:32.254872] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:09.505 [2024-04-24 21:40:32.326472] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:09.505 [2024-04-24 21:40:32.326508] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:09.505 [2024-04-24 21:40:32.326517] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:09.505 [2024-04-24 21:40:32.326526] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:09.505 [2024-04-24 21:40:32.326533] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
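(The error-path variant starting here is the same harness with crc32c deliberately sabotaged; a sketch of the extra RPCs exactly as they appear in the trace below. Calls without -s go through rpc_cmd to the target's default /var/tmp/spdk.sock; the others go to the initiator's /var/tmp/bperf.sock.)

  # on the target: route every crc32c operation through the accel "error" module
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error
  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # keep retrying failed I/O
  # injection starts disabled, the controller is attached with data digest on,
  # then crc32c results are corrupted (-t corrupt, bounded by the -i 256 argument)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256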
00:25:09.505 [2024-04-24 21:40:32.326558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.438 21:40:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:10.438 21:40:32 -- common/autotest_common.sh@850 -- # return 0 00:25:10.438 21:40:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:10.438 21:40:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:10.438 21:40:32 -- common/autotest_common.sh@10 -- # set +x 00:25:10.438 21:40:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.438 21:40:32 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:10.438 21:40:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.438 21:40:32 -- common/autotest_common.sh@10 -- # set +x 00:25:10.438 [2024-04-24 21:40:33.000549] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:10.438 21:40:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.438 21:40:33 -- host/digest.sh@105 -- # common_target_config 00:25:10.438 21:40:33 -- host/digest.sh@43 -- # rpc_cmd 00:25:10.438 21:40:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.438 21:40:33 -- common/autotest_common.sh@10 -- # set +x 00:25:10.438 null0 00:25:10.438 [2024-04-24 21:40:33.088146] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.438 [2024-04-24 21:40:33.112342] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.438 21:40:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.438 21:40:33 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:10.438 21:40:33 -- host/digest.sh@54 -- # local rw bs qd 00:25:10.438 21:40:33 -- host/digest.sh@56 -- # rw=randread 00:25:10.438 21:40:33 -- host/digest.sh@56 -- # bs=4096 00:25:10.438 21:40:33 -- host/digest.sh@56 -- # qd=128 00:25:10.438 21:40:33 -- host/digest.sh@58 -- # bperfpid=2990056 00:25:10.438 21:40:33 -- host/digest.sh@60 -- # waitforlisten 2990056 /var/tmp/bperf.sock 00:25:10.438 21:40:33 -- common/autotest_common.sh@817 -- # '[' -z 2990056 ']' 00:25:10.438 21:40:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:10.438 21:40:33 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:10.438 21:40:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:10.438 21:40:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:10.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:10.438 21:40:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:10.438 21:40:33 -- common/autotest_common.sh@10 -- # set +x 00:25:10.438 [2024-04-24 21:40:33.147361] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:25:10.439 [2024-04-24 21:40:33.147404] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990056 ] 00:25:10.439 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.439 [2024-04-24 21:40:33.217959] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.439 [2024-04-24 21:40:33.292133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.371 21:40:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:11.371 21:40:33 -- common/autotest_common.sh@850 -- # return 0 00:25:11.371 21:40:33 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:11.371 21:40:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:11.371 21:40:34 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:11.371 21:40:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.371 21:40:34 -- common/autotest_common.sh@10 -- # set +x 00:25:11.371 21:40:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.371 21:40:34 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:11.371 21:40:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:11.629 nvme0n1 00:25:11.629 21:40:34 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:11.629 21:40:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:11.629 21:40:34 -- common/autotest_common.sh@10 -- # set +x 00:25:11.629 21:40:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:11.629 21:40:34 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:11.629 21:40:34 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:11.629 Running I/O for 2 seconds... 
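(Each corrupted digest then surfaces below as a three-entry group: the initiator's nvme_tcp.c *ERROR* flagging the bad data digest, the READ it belonged to, and a completion with status COMMAND TRANSIENT TRANSPORT ERROR (00/22), which bdev_nvme keeps retrying under --bdev-retry-count -1. One way to tally them from a saved copy of this console output; the build.log name is illustrative:)

  grep -c 'data digest error on tqpair' build.log                  # injected digest failures seen
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log    # their retried completions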
00:25:11.629 [2024-04-24 21:40:34.500334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0)
00:25:11.629 [2024-04-24 21:40:34.500369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:11.629 [2024-04-24 21:40:34.500382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[the same three-entry group repeats for roughly sixty further READs on tqpair 0xb7fcf0, timestamps 21:40:34.512691 through 21:40:35.072564, with only cid, lba and timestamps varying]
00:25:12.404 [2024-04-24 21:40:35.080835]
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.404 [2024-04-24 21:40:35.080855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.404 [2024-04-24 21:40:35.080865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.404 [2024-04-24 21:40:35.090685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.404 [2024-04-24 21:40:35.090705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.404 [2024-04-24 21:40:35.090716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.404 [2024-04-24 21:40:35.099274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.404 [2024-04-24 21:40:35.099294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.404 [2024-04-24 21:40:35.099305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.404 [2024-04-24 21:40:35.109021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.404 [2024-04-24 21:40:35.109041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.404 [2024-04-24 21:40:35.109051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.404 [2024-04-24 21:40:35.117762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.404 [2024-04-24 21:40:35.117783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.404 [2024-04-24 21:40:35.117794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.404 [2024-04-24 21:40:35.126596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.404 [2024-04-24 21:40:35.126619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.404 [2024-04-24 21:40:35.126631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.137040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.137062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.137073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:12.405 [2024-04-24 21:40:35.144978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.145000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.145010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.154341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.154364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.154374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.163606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.163628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.163642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.172742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.172763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.172773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.181836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.181858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.181868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.191338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.191360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.191371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.200144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.200166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.200176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.209291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.209313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.209324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.218340] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.218361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.218372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.227131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.227152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.227163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.237286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.237309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.237320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.244904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.244926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.244936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.254252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.254274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.254284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.263139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.263161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.263172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.272409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.272432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.272442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.405 [2024-04-24 21:40:35.282492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.405 [2024-04-24 21:40:35.282514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.405 [2024-04-24 21:40:35.282525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.291355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.291381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.291394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.300405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.300430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.300442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.309710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.309732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.309743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.318801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.318823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.318840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.327618] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.327641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.327652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.336971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.336994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.337005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.346233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.346255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.346265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.355879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.355901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.355912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.364371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.364393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.364403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.373901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.373923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.373934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.382675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.382698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.382709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.391872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.391894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 
[2024-04-24 21:40:35.391904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.401983] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.402008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.402020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.410640] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.410662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.410674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.420968] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.420990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.421001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.431061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.431084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.663 [2024-04-24 21:40:35.431095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.663 [2024-04-24 21:40:35.439273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.663 [2024-04-24 21:40:35.439295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.664 [2024-04-24 21:40:35.439306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.664 [2024-04-24 21:40:35.449553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.664 [2024-04-24 21:40:35.449575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.664 [2024-04-24 21:40:35.449587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.664 [2024-04-24 21:40:35.458026] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.664 [2024-04-24 21:40:35.458048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18166 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.664 [2024-04-24 21:40:35.458059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.664 [2024-04-24 21:40:35.468098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.664 [2024-04-24 21:40:35.468121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.664 [2024-04-24 21:40:35.468131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.664 [2024-04-24 21:40:35.477436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.664 [2024-04-24 21:40:35.477465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.664 [2024-04-24 21:40:35.477476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.664 [2024-04-24 21:40:35.486115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.664 [2024-04-24 21:40:35.486137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.664 [2024-04-24 21:40:35.486148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.664 [2024-04-24 21:40:35.495634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.664 [2024-04-24 21:40:35.495656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.664 [2024-04-24 21:40:35.495667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.664 [2024-04-24 21:40:35.505398] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.664 [2024-04-24 21:40:35.505421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.664 [2024-04-24 21:40:35.505431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.664 [2024-04-24 21:40:35.514827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.664 [2024-04-24 21:40:35.514849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.664 [2024-04-24 21:40:35.514860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.664 [2024-04-24 21:40:35.522936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.664 [2024-04-24 21:40:35.522958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:79 nsid:1 lba:230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.664 [2024-04-24 21:40:35.522969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.664 [2024-04-24 21:40:35.532207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.664 [2024-04-24 21:40:35.532229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.664 [2024-04-24 21:40:35.532241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.664 [2024-04-24 21:40:35.541646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.664 [2024-04-24 21:40:35.541668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.664 [2024-04-24 21:40:35.541679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.925 [2024-04-24 21:40:35.550920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.925 [2024-04-24 21:40:35.550947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.925 [2024-04-24 21:40:35.550959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.925 [2024-04-24 21:40:35.560570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.925 [2024-04-24 21:40:35.560596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.925 [2024-04-24 21:40:35.560611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.925 [2024-04-24 21:40:35.568782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.925 [2024-04-24 21:40:35.568806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.925 [2024-04-24 21:40:35.568817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.925 [2024-04-24 21:40:35.578291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.925 [2024-04-24 21:40:35.578314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.925 [2024-04-24 21:40:35.578325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.925 [2024-04-24 21:40:35.587290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.925 [2024-04-24 21:40:35.587313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.925 [2024-04-24 21:40:35.587324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.925 [2024-04-24 21:40:35.597124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.925 [2024-04-24 21:40:35.597146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.925 [2024-04-24 21:40:35.597157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.605771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.605793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.605804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.615143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.615166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.615177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.623420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.623442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.623458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.633409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.633431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.633442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.643195] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.643220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.643230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.652219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 
[2024-04-24 21:40:35.652241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.652252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.661128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.661150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.661161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.670335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.670357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.670368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.679878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.679900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.679911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.688552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.688574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.688584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.698272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.698294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.698305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.706786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.706809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.706819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.716203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.716225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.716236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.726008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.726030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.726041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.733322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.733344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.733356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.743583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.743605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.743616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.752844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.752867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.752877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.762353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.762375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.762387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.772031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.772053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.772064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.780809] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.780830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.780841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.790901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.790923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.790934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:12.926 [2024-04-24 21:40:35.799254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:12.926 [2024-04-24 21:40:35.799276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.926 [2024-04-24 21:40:35.799291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.201 [2024-04-24 21:40:35.809728] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.201 [2024-04-24 21:40:35.809754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.201 [2024-04-24 21:40:35.809766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.201 [2024-04-24 21:40:35.819522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.201 [2024-04-24 21:40:35.819549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.201 [2024-04-24 21:40:35.819561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.201 [2024-04-24 21:40:35.830292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.830316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.830328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.839729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.839752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.839764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:13.202 [2024-04-24 21:40:35.848352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.848374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.848385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.858620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.858643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.858654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.868068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.868090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.868100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.876322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.876343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.876354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.885855] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.885877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.885887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.894879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.894902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.894913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.904486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.904508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.904519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.913419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.913440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.913455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.922845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.922868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.922879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.931758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.931780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.931791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.940238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.940261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.940272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.950516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.950538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.950548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.959356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.959378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.959392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.968519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.968541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.968552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.977669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.977691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.977702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.987004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.987026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.987037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:35.995943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:35.995965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:35.995976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:36.005600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:36.005623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:36.005634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:36.014377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:36.014400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:36.014410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:36.023232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:36.023254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:36.023264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:36.032661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:36.032682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:36.032692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:36.040787] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:36.040811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:36.040822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:36.050780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:36.050802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:36.050812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:36.059220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:36.059242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:36.059253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:36.068401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.202 [2024-04-24 21:40:36.068422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.202 [2024-04-24 21:40:36.068433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.202 [2024-04-24 21:40:36.077462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.203 [2024-04-24 21:40:36.077485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.203 [2024-04-24 21:40:36.077496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.203 [2024-04-24 21:40:36.087155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.203 [2024-04-24 21:40:36.087186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.203 [2024-04-24 21:40:36.087202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.096212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.096238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 
[2024-04-24 21:40:36.096250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.105030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.105053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.105064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.115370] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.115393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.115404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.124176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.124198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.124209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.133359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.133381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.133392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.141881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.141904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.141915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.151763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.151785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.151796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.159719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.159741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13118 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.159752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.169804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.169826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.169836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.179301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.179323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.179334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.188674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.188696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.188706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.197537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.197559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.197573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.206805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.206826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.206837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.215235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.215257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.215268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.224124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.224147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:16 nsid:1 lba:2317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.224157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.233964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.233986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.233996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.242916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.242938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.242948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.251734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.251756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.251768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.260357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.260379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.260390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.269361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.269383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.269394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.279594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.279620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.279631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.292180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.292202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.292212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.303631] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.303652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.303663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.313202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.313223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.313233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.325050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.325072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.460 [2024-04-24 21:40:36.325082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.460 [2024-04-24 21:40:36.335140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.460 [2024-04-24 21:40:36.335162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.461 [2024-04-24 21:40:36.335172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.461 [2024-04-24 21:40:36.344412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.461 [2024-04-24 21:40:36.344437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.461 [2024-04-24 21:40:36.344448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.716 [2024-04-24 21:40:36.356688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.716 [2024-04-24 21:40:36.356714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.716 [2024-04-24 21:40:36.356725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.716 [2024-04-24 21:40:36.366364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 
00:25:13.716 [2024-04-24 21:40:36.366387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.716 [2024-04-24 21:40:36.366398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.716 [2024-04-24 21:40:36.377913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.716 [2024-04-24 21:40:36.377936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.716 [2024-04-24 21:40:36.377946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.716 [2024-04-24 21:40:36.385955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.716 [2024-04-24 21:40:36.385977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.716 [2024-04-24 21:40:36.385987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.716 [2024-04-24 21:40:36.396261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.716 [2024-04-24 21:40:36.396283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.716 [2024-04-24 21:40:36.396293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.716 [2024-04-24 21:40:36.409661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.716 [2024-04-24 21:40:36.409683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.716 [2024-04-24 21:40:36.409694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.716 [2024-04-24 21:40:36.419386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.716 [2024-04-24 21:40:36.419407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.716 [2024-04-24 21:40:36.419417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.716 [2024-04-24 21:40:36.429907] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.716 [2024-04-24 21:40:36.429929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.716 [2024-04-24 21:40:36.429941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.716 [2024-04-24 21:40:36.440242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xb7fcf0) 00:25:13.717 [2024-04-24 21:40:36.440265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.717 [2024-04-24 21:40:36.440275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.717 [2024-04-24 21:40:36.448156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.717 [2024-04-24 21:40:36.448179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.717 [2024-04-24 21:40:36.448190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.717 [2024-04-24 21:40:36.458166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.717 [2024-04-24 21:40:36.458188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.717 [2024-04-24 21:40:36.458202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.717 [2024-04-24 21:40:36.470440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb7fcf0) 00:25:13.717 [2024-04-24 21:40:36.470467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.717 [2024-04-24 21:40:36.470478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.717 00:25:13.717 Latency(us) 00:25:13.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.717 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:13.717 nvme0n1 : 2.00 26975.10 105.37 0.00 0.00 4740.41 2202.01 23383.24 00:25:13.717 =================================================================================================================== 00:25:13.717 Total : 26975.10 105.37 0.00 0.00 4740.41 2202.01 23383.24 00:25:13.717 0 00:25:13.717 21:40:36 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:13.717 21:40:36 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:13.717 21:40:36 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:13.717 | .driver_specific 00:25:13.717 | .nvme_error 00:25:13.717 | .status_code 00:25:13.717 | .command_transient_transport_error' 00:25:13.717 21:40:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:13.974 21:40:36 -- host/digest.sh@71 -- # (( 211 > 0 )) 00:25:13.974 21:40:36 -- host/digest.sh@73 -- # killprocess 2990056 00:25:13.974 21:40:36 -- common/autotest_common.sh@936 -- # '[' -z 2990056 ']' 00:25:13.974 21:40:36 -- common/autotest_common.sh@940 -- # kill -0 2990056 00:25:13.974 21:40:36 -- common/autotest_common.sh@941 -- # uname 00:25:13.974 21:40:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:13.974 21:40:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2990056 00:25:13.974 21:40:36 -- 
21:40:36 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
21:40:36 -- host/digest.sh@54 -- # local rw bs qd
21:40:36 -- host/digest.sh@56 -- # rw=randread
21:40:36 -- host/digest.sh@56 -- # bs=131072
21:40:36 -- host/digest.sh@56 -- # qd=16
21:40:36 -- host/digest.sh@58 -- # bperfpid=2990646
21:40:36 -- host/digest.sh@60 -- # waitforlisten 2990646 /var/tmp/bperf.sock
21:40:36 -- common/autotest_common.sh@817 -- # '[' -z 2990646 ']'
21:40:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
21:40:36 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
21:40:36 -- common/autotest_common.sh@822 -- # local max_retries=100
21:40:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
21:40:36 -- common/autotest_common.sh@826 -- # xtrace_disable
21:40:36 -- common/autotest_common.sh@10 -- # set +x
00:25:14.231 [2024-04-24 21:40:36.954641] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:25:14.231 [2024-04-24 21:40:36.954692] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990646 ]
00:25:14.231 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:14.231 Zero copy mechanism will not be used.
00:25:14.231 EAL: No free 2048 kB hugepages reported on node 1
00:25:14.231 [2024-04-24 21:40:37.024660] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:14.231 [2024-04-24 21:40:37.100168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
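At this point run_bperf_err has launched a fresh bdevperf instance for the next case (randread, 128 KiB I/O, queue depth 16). The launch pattern is the usual SPDK one: start bdevperf with -z so it idles until configured over RPC, then wait for the UNIX-domain socket to answer before sending commands. A sketch of that sequence under the paths from the log (the SPDK variable is shorthand, and the polling loop is an illustrative stand-in for the harness's waitforlisten helper, here using the generic rpc_get_methods RPC as a liveness probe):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -z: start the app but wait for RPC configuration instead of running I/O
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # poll until the RPC server behind /var/tmp/bperf.sock responds
    until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done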
00:25:15.159 21:40:37 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:25:15.159 21:40:37 -- common/autotest_common.sh@850 -- # return 0
00:25:15.159 21:40:37 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:15.159 21:40:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:15.159 21:40:37 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:15.159 21:40:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:15.159 21:40:37 -- common/autotest_common.sh@10 -- # set +x
00:25:15.159 21:40:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:15.159 21:40:37 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:15.159 21:40:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:15.415 nvme0n1
00:25:15.415 21:40:38 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:15.415 21:40:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:15.415 21:40:38 -- common/autotest_common.sh@10 -- # set +x
00:25:15.415 21:40:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:15.415 21:40:38 -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:15.415 21:40:38 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:15.672 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:15.672 Zero copy mechanism will not be used.
00:25:15.672 Running I/O for 2 seconds...
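That trace is the whole fault-injection recipe for this case, in order: configure the bdev layer to keep per-error-code NVMe statistics and retry failed I/O indefinitely (--nvme-error-stat --bdev-retry-count -1), make sure accel error injection starts out disabled, attach the target over TCP with data digest enabled (--ddgst), switch the accel crc32c operation to corrupt its results (-t corrupt -i 32, arguments taken verbatim from the trace), and only then start the workload with perform_tests. Condensed into the equivalent direct rpc.py calls (the SPDK variable is shorthand; note the accel_error_inject_error calls go through rpc_cmd in the script, i.e. the application's default RPC socket rather than bperf.sock):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # initiator-side bdev options, sent to the bdevperf instance
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt crc32c results in the accel layer so digest checks fail
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Every READ whose digest check hits a corrupted crc32c then completes as COMMAND TRANSIENT TRANSPORT ERROR and is retried by the bdev layer, which is exactly the flood of *ERROR*/*NOTICE* pairs that follows.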
00:25:15.672 [2024-04-24 21:40:38.350601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.350635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.350648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.364376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.364402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.364414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.375717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.375740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.375751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.386902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.386931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.386941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.398145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.398167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.398177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.409297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.409319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.409330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.420575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.420597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.420609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.431906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.431927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.431939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.443226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.443248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.443258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.454589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.454611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.454621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.465927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.465949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.465959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.477516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.477539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.477553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.488798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.488820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.488831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.500151] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.500173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.500184] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.511518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.511538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.511549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.522804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.522825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.522836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.534110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.534131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.534142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.545386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.545407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.545417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.672 [2024-04-24 21:40:38.556682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.672 [2024-04-24 21:40:38.556707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.672 [2024-04-24 21:40:38.556719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.928 [2024-04-24 21:40:38.568055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.928 [2024-04-24 21:40:38.568080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.928 [2024-04-24 21:40:38.568092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.928 [2024-04-24 21:40:38.579375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.928 [2024-04-24 21:40:38.579402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:15.928 [2024-04-24 21:40:38.579414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.928 [2024-04-24 21:40:38.590642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.928 [2024-04-24 21:40:38.590664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.928 [2024-04-24 21:40:38.590675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.928 [2024-04-24 21:40:38.601943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.928 [2024-04-24 21:40:38.601965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.928 [2024-04-24 21:40:38.601976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.928 [2024-04-24 21:40:38.613246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.928 [2024-04-24 21:40:38.613268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.928 [2024-04-24 21:40:38.613279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.928 [2024-04-24 21:40:38.624546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.928 [2024-04-24 21:40:38.624567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.928 [2024-04-24 21:40:38.624578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.928 [2024-04-24 21:40:38.635774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.928 [2024-04-24 21:40:38.635795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.928 [2024-04-24 21:40:38.635806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.928 [2024-04-24 21:40:38.646994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.928 [2024-04-24 21:40:38.647015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.928 [2024-04-24 21:40:38.647026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.928 [2024-04-24 21:40:38.658323] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.928 [2024-04-24 21:40:38.658345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.929 [2024-04-24 21:40:38.658355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.929 [2024-04-24 21:40:38.669558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.929 [2024-04-24 21:40:38.669579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.929 [2024-04-24 21:40:38.669589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.929 [2024-04-24 21:40:38.680795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.929 [2024-04-24 21:40:38.680816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.929 [2024-04-24 21:40:38.680827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.929 [2024-04-24 21:40:38.692116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.929 [2024-04-24 21:40:38.692138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.929 [2024-04-24 21:40:38.692148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.929 [2024-04-24 21:40:38.703404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.929 [2024-04-24 21:40:38.703425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.929 [2024-04-24 21:40:38.703436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.929 [2024-04-24 21:40:38.714699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.929 [2024-04-24 21:40:38.714720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.929 [2024-04-24 21:40:38.714730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.929 [2024-04-24 21:40:38.725974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.929 [2024-04-24 21:40:38.725996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.929 [2024-04-24 21:40:38.726006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.929 [2024-04-24 21:40:38.737194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.929 [2024-04-24 21:40:38.737215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.929 [2024-04-24 21:40:38.737226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.929 [2024-04-24 21:40:38.748416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.929 [2024-04-24 21:40:38.748437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.929 [2024-04-24 21:40:38.748447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.929 [2024-04-24 21:40:38.759627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.929 [2024-04-24 21:40:38.759648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.929 [2024-04-24 21:40:38.759658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.929 [2024-04-24 21:40:38.770843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.929 [2024-04-24 21:40:38.770865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.929 [2024-04-24 21:40:38.770878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.929 [2024-04-24 21:40:38.782074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.929 [2024-04-24 21:40:38.782095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.929 [2024-04-24 21:40:38.782106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.929 [2024-04-24 21:40:38.793303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.929 [2024-04-24 21:40:38.793324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.929 [2024-04-24 21:40:38.793334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.929 [2024-04-24 21:40:38.804533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 00:25:15.929 [2024-04-24 21:40:38.804554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.929 [2024-04-24 21:40:38.804564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.186 [2024-04-24 21:40:38.815799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0) 
00:25:16.186 [2024-04-24 21:40:38.815825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.186 [2024-04-24 21:40:38.815836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line cycle repeats for the remainder of the 2-second run, at roughly 11-20 ms intervals from 21:40:38.827 through 21:40:40.309: nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0), followed by the READ command print (sqid:1 cid:15 nsid:1, len:32, lba varying) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (sqhd cycling 0001/0021/0041/0061); only the final cycle is kept below ...]
00:25:17.474 [2024-04-24 21:40:40.321077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ade0)
00:25:17.474 [2024-04-24 21:40:40.321098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.474 [2024-04-24 21:40:40.321109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:17.474
00:25:17.474 Latency(us)
00:25:17.474 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s  Average      min      max
00:25:17.474 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:17.474 nvme0n1            : 2.01        2605.38  325.67   0.00   0.00  6138.46  5347.74  24222.11
00:25:17.474 ===================================================================================================================
00:25:17.474 Total              :             2605.38  325.67   0.00   0.00  6138.46  5347.74  24222.11
00:25:17.474 0
00:25:17.475 21:40:40 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:17.475 21:40:40 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:17.475 21:40:40 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:17.475 | .driver_specific
00:25:17.475 | .nvme_error
00:25:17.475 | .status_code
00:25:17.475 | .command_transient_transport_error'
00:25:17.475 21:40:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
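The get_transient_errcount helper traced above is one RPC call piped through jq. A minimal equivalent sketch (bdev name, socket path, and jq filter taken from the trace; rpc.py stands in for the full workspace path):

    # Pull the per-status-code NVMe error counters (kept because the bdev was set up
    # with bdev_nvme_set_options --nvme-error-stat) and extract the transient count.
    errcount=$(rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))   # the (( 168 > 0 )) evaluation below is exactly this check, value filled in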
was about 2.000000 seconds 00:25:17.732 00:25:17.732 Latency(us) 00:25:17.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.732 =================================================================================================================== 00:25:17.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:17.732 21:40:40 -- common/autotest_common.sh@960 -- # wait 2990646 00:25:17.988 21:40:40 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:17.988 21:40:40 -- host/digest.sh@54 -- # local rw bs qd 00:25:17.988 21:40:40 -- host/digest.sh@56 -- # rw=randwrite 00:25:17.988 21:40:40 -- host/digest.sh@56 -- # bs=4096 00:25:17.988 21:40:40 -- host/digest.sh@56 -- # qd=128 00:25:17.988 21:40:40 -- host/digest.sh@58 -- # bperfpid=2991406 00:25:17.988 21:40:40 -- host/digest.sh@60 -- # waitforlisten 2991406 /var/tmp/bperf.sock 00:25:17.988 21:40:40 -- common/autotest_common.sh@817 -- # '[' -z 2991406 ']' 00:25:17.988 21:40:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:17.988 21:40:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:17.988 21:40:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:17.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:17.988 21:40:40 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:17.988 21:40:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:17.988 21:40:40 -- common/autotest_common.sh@10 -- # set +x 00:25:17.988 [2024-04-24 21:40:40.788056] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
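The randwrite pass that starts here is driven the same way as the randread pass above: host/digest.sh launches bdevperf pinned to one core (-m 2) on a private RPC socket, with a 2-second 4 KiB randwrite workload at queue depth 128, and -z so the app pauses until it has been configured over RPC. A minimal sketch of that launch, assuming this job's checkout path; the polling loop is only a stand-in for the harness's waitforlisten helper:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # repo path from this job
SOCK=/var/tmp/bperf.sock

# Start bdevperf paused (-z): 4 KiB random writes, qd 128, 2 s, core mask 0x2.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

# Stand-in for waitforlisten: poll until the UNIX-domain socket answers RPCs.
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done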
00:25:17.988 [2024-04-24 21:40:40.788109] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2991406 ] 00:25:17.988 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.988 [2024-04-24 21:40:40.857882] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.244 [2024-04-24 21:40:40.925969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.808 21:40:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:18.808 21:40:41 -- common/autotest_common.sh@850 -- # return 0 00:25:18.808 21:40:41 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:18.808 21:40:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:19.064 21:40:41 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:19.064 21:40:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.064 21:40:41 -- common/autotest_common.sh@10 -- # set +x 00:25:19.064 21:40:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.064 21:40:41 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.064 21:40:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.320 nvme0n1 00:25:19.320 21:40:42 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:19.320 21:40:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.320 21:40:42 -- common/autotest_common.sh@10 -- # set +x 00:25:19.320 21:40:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.320 21:40:42 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:19.320 21:40:42 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:19.320 Running I/O for 2 seconds... 
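With bdevperf listening, the xtrace above configures both ends before kicking off I/O: the initiator keeps per-status-code NVMe error statistics and retries failed I/O indefinitely, the controller is attached with TCP data digests enabled, and only then is the target's accel crc32c path told to corrupt checksums. The same sequence condensed into plain rpc.py calls (in the harness, bperf_rpc wraps rpc.py -s /var/tmp/bperf.sock and rpc_cmd talks to the nvmf target on its default socket):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF=/var/tmp/bperf.sock

# Initiator: collect NVMe error stats, retry failed I/O forever at the bdev layer.
"$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Target: make sure crc32c corruption is off while the controller attaches.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
# Attach with --ddgst so every TCP data PDU carries a CRC32C data digest.
"$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_attach_controller --ddgst -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Target: now corrupt crc32c results (same -i 256 argument as in the run above).
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
# Run the configured 2-second workload.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF" perform_tests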
00:25:19.320 [2024-04-24 21:40:42.185857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:19.320 [2024-04-24 21:40:42.186552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.320 [2024-04-24 21:40:42.186583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.320 [2024-04-24 21:40:42.195432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:19.320 [2024-04-24 21:40:42.195658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.321 [2024-04-24 21:40:42.195684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:19.321 [2024-04-24 21:40:42.204805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:19.321 [2024-04-24 21:40:42.205026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.321 [2024-04-24 21:40:42.205049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:19.578 [2024-04-24 21:40:42.214393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:19.578 [2024-04-24 21:40:42.214630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.578 [2024-04-24 21:40:42.214659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:19.578 [2024-04-24 21:40:42.223733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:19.578 [2024-04-24 21:40:42.223970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.578 [2024-04-24 21:40:42.223992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:19.578 [2024-04-24 21:40:42.233065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:19.578 [2024-04-24 21:40:42.233303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.578 [2024-04-24 21:40:42.233324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:19.578 [2024-04-24 21:40:42.242388] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:19.578 [2024-04-24 21:40:42.242628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.578 [2024-04-24 21:40:42.242650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
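Every entry in the dump that follows is one injected failure: the corrupted CRC32C makes data digest verification fail on the receiving side (tcp.c:2047 data_crc32_calc_done), so the WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the initiator's bdev layer retries and, thanks to --nvme-error-stat, counts per status code. When the run ends, the harness reads that counter back and requires it to be nonzero, mirroring the (( 168 > 0 )) check from the randread pass above. A minimal sketch of that check, reusing the jq filter visible earlier in the log (errs is an illustrative name):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

errs=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')
# Error injection must have produced at least one counted failure.
(( errs > 0 )) && echo "observed $errs transient transport errors"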
[2024-04-24 21:40:42.251687 through 21:40:42.828822: roughly 60 further entries of the same shape omitted. Each one pairs a tcp.c:2047 *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 with a WRITE command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, qid:1 sqhd:007f p:0 m:0 dnr:0; only the timestamp, lba and cid (rotating 3/4/0) differ.]
00:25:20.094 [2024-04-24 21:40:42.837774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.838003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.838023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.094 [2024-04-24 21:40:42.847081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.847311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.847331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.094 [2024-04-24 21:40:42.856335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.856573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.856594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.094 [2024-04-24 21:40:42.865622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.865854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.865877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.094 [2024-04-24 21:40:42.874864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.875091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.875112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.094 [2024-04-24 21:40:42.884079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.884311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.884331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.094 [2024-04-24 21:40:42.893414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.893652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.893673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 
dnr:0 00:25:20.094 [2024-04-24 21:40:42.902673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.902904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.902924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.094 [2024-04-24 21:40:42.911950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.912184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.912204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.094 [2024-04-24 21:40:42.921252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.921486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.921505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.094 [2024-04-24 21:40:42.930523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.930754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.930774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.094 [2024-04-24 21:40:42.939786] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.940020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.940039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.094 [2024-04-24 21:40:42.949199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.949432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.949456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.094 [2024-04-24 21:40:42.958492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.958718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.958738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 
dnr:0 00:25:20.094 [2024-04-24 21:40:42.967889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.968215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.968235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.094 [2024-04-24 21:40:42.977162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.094 [2024-04-24 21:40:42.977857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.094 [2024-04-24 21:40:42.977880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.351 [2024-04-24 21:40:42.986748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.351 [2024-04-24 21:40:42.987182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.351 [2024-04-24 21:40:42.987206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.351 [2024-04-24 21:40:42.996014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fdeb0 00:25:20.351 [2024-04-24 21:40:42.996817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.351 [2024-04-24 21:40:42.996838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:20.351 [2024-04-24 21:40:43.008795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fd640 00:25:20.351 [2024-04-24 21:40:43.010334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.351 [2024-04-24 21:40:43.010354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:20.351 [2024-04-24 21:40:43.020753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fe720 00:25:20.351 [2024-04-24 21:40:43.020968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.351 [2024-04-24 21:40:43.020988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:20.351 [2024-04-24 21:40:43.029946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fe720 00:25:20.351 [2024-04-24 21:40:43.031676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.351 [2024-04-24 21:40:43.031696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 
m:0 dnr:0
00:25:20.351 [2024-04-24 21:40:43.043521] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6dadb0) with pdu=0x2000190fc560
00:25:20.351 [2024-04-24 21:40:43.044279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:20.351 [2024-04-24 21:40:43.044299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... similar records omitted: from 21:40:43.04 through 21:40:44.16, every injected CRC32C data digest error on tqpair=(0x6dadb0), across a range of pdu values, logs the same three-line pattern as above (data_crc32_calc_done *ERROR*, the affected 4 KiB WRITE, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion), consistent with the transient error count of 209 queried below ...]
00:25:21.386
00:25:21.386 Latency(us)
00:25:21.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:21.386 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:21.386 nvme0n1 : 2.00 26607.30 103.93 0.00 0.00 4802.59 2726.30 28101.84
00:25:21.386 ===================================================================================================================
00:25:21.386 Total : 26607.30 103.93 0.00 0.00 4802.59 2726.30 28101.84
00:25:21.386 0
00:25:21.386 21:40:44 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:21.386 21:40:44 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:21.386 21:40:44 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:21.386 | .driver_specific
00:25:21.386 | .nvme_error
00:25:21.386 | .status_code
00:25:21.386 | .command_transient_transport_error'
00:25:21.386 21:40:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:21.643 21:40:44 -- host/digest.sh@71 -- # (( 209 > 0 ))
00:25:21.643 21:40:44 -- host/digest.sh@73 -- # killprocess 2991406
00:25:21.643 21:40:44 -- common/autotest_common.sh@936 -- # '[' -z 2991406 ']'
00:25:21.643 21:40:44 -- common/autotest_common.sh@940 -- # kill -0 2991406
00:25:21.643 21:40:44 -- common/autotest_common.sh@941 -- # uname
00:25:21.643 21:40:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:21.643 21:40:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2991406
00:25:21.643 21:40:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:25:21.643 21:40:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:25:21.643 21:40:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2991406'
00:25:21.643 killing process with pid 2991406
00:25:21.643 21:40:44 -- common/autotest_common.sh@955 -- # kill 2991406
00:25:21.643 Received shutdown signal, test time was about 2.000000 seconds
00:25:21.643
00:25:21.643 Latency(us)
00:25:21.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:21.643 ===================================================================================================================
00:25:21.643 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:21.643 21:40:44 -- common/autotest_common.sh@960 -- # wait 2991406
00:25:21.900 21:40:44 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:25:21.900 21:40:44 -- host/digest.sh@54 -- # local rw bs qd
00:25:21.900 21:40:44 -- host/digest.sh@56 -- # rw=randwrite
00:25:21.900 21:40:44 -- host/digest.sh@56 -- # bs=131072
00:25:21.900 21:40:44 -- host/digest.sh@56 -- # qd=16
00:25:21.900 21:40:44 -- host/digest.sh@58 -- # bperfpid=2991958
00:25:21.900 21:40:44 -- host/digest.sh@60 -- # waitforlisten 2991958 /var/tmp/bperf.sock
00:25:21.900 21:40:44 -- common/autotest_common.sh@817 -- # '[' -z 2991958 ']'
00:25:21.900 21:40:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:21.900 21:40:44 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:21.900 21:40:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:21.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
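For reference, the get_transient_errcount check traced above can be reproduced standalone. A minimal sketch, assuming the same RPC socket (-r /var/tmp/bperf.sock) and bdev name (nvme0n1) as in this run; the jq path is exactly the filter printed in the trace:

  #!/usr/bin/env bash
  # Ask the running bdevperf instance for per-bdev I/O statistics, then pull out
  # the NVMe transient transport error counter (populated because the test sets
  # bdev_nvme_set_options --nvme-error-stat before attaching the controller).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The test treats a non-zero count (here 209) as proof that every corrupted data digest surfaced as a transient transport error rather than being silently retried.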
00:25:21.900 21:40:44 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:21.900 21:40:44 -- common/autotest_common.sh@10 -- # set +x
00:25:21.900 21:40:44 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:25:21.900 [2024-04-24 21:40:44.644379] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:25:21.900 [2024-04-24 21:40:44.644431] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2991958 ]
00:25:21.900 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:21.900 Zero copy mechanism will not be used.
00:25:21.900 EAL: No free 2048 kB hugepages reported on node 1
00:25:21.900 [2024-04-24 21:40:44.717116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:22.156 [2024-04-24 21:40:44.791163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:22.767 21:40:45 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:25:22.767 21:40:45 -- common/autotest_common.sh@850 -- # return 0
00:25:22.767 21:40:45 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:22.767 21:40:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:22.767 21:40:45 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:22.767 21:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:22.767 21:40:45 -- common/autotest_common.sh@10 -- # set +x
00:25:23.029 21:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:23.029 21:40:45 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:23.029 21:40:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:23.284 nvme0n1
00:25:23.284 21:40:45 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:23.284 21:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:23.284 21:40:45 -- common/autotest_common.sh@10 -- # set +x
00:25:23.284 21:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:23.284 21:40:45 -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:23.284 21:40:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:23.284 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:23.284 Zero copy mechanism will not be used.
00:25:23.284 Running I/O for 2 seconds...
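This second pass exercises data-digest failures on 128 KiB random writes at queue depth 16. Condensed into a standalone sketch; paths, the bperf socket, and the TCP target address are the ones from this trace, while the target-side socket for rpc_cmd is an assumption (the default SPDK application socket), since the trace only expands the bperf_rpc helper:

  #!/usr/bin/env bash
  # Sketch of the traced sequence, not the digest.sh source itself.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf app (bperf_rpc in the trace)
  TARGET="$SPDK/scripts/rpc.py"                         # nvmf target, default socket (rpc_cmd; assumed)

  # Start bdevperf on core 1 (-m 2) in RPC-wait mode (-z): 128 KiB random
  # writes, queue depth 16, 2-second run. The test then waits for the RPC
  # socket via waitforlisten before issuing the commands below.
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # Keep per-status-code NVMe error counters and disable driver-level retries,
  # so every transient transport error remains visible in bdev_get_iostat.
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach with a clean accel path first, data digest enabled end to end (--ddgst).
  $TARGET accel_error_inject_error -o crc32c -t disable
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Now corrupt every 32nd crc32c operation and drive the workload.
  $TARGET accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Disarming the corruption before attach matters: the controller handshake itself uses the digest path, so injection is only re-enabled once nvme0n1 exists. With -i 32, roughly one in every 32 digests is corrupted, which is what produces the evenly spaced error records that follow.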
00:25:23.284 [2024-04-24 21:40:46.110922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.284 [2024-04-24 21:40:46.111489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.284 [2024-04-24 21:40:46.111519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.284 [2024-04-24 21:40:46.127011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.284 [2024-04-24 21:40:46.127585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.284 [2024-04-24 21:40:46.127610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.284 [2024-04-24 21:40:46.143178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.284 [2024-04-24 21:40:46.143657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.284 [2024-04-24 21:40:46.143681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.284 [2024-04-24 21:40:46.159924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.284 [2024-04-24 21:40:46.160250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.284 [2024-04-24 21:40:46.160273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.175979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.176527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.176553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.192935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.193419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.193442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.210338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.210833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.210856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.227239] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.227852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.227873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.244421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.245098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.245119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.262589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.262999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.263021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.279255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.279693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.279714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.296020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.296547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.296569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.312592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.312988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.313009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.329540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.330167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.330188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.348420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.348974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.348995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.365513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.366004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.366025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.384088] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.384621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.384642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.402835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.403462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.403483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.540 [2024-04-24 21:40:46.418561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.540 [2024-04-24 21:40:46.419178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.540 [2024-04-24 21:40:46.419200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.797 [2024-04-24 21:40:46.437229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.797 [2024-04-24 21:40:46.437730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.797 [2024-04-24 21:40:46.437755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.797 [2024-04-24 21:40:46.454542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.797 [2024-04-24 21:40:46.455046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.797 [2024-04-24 21:40:46.455068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.797 [2024-04-24 21:40:46.471954] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.797 [2024-04-24 21:40:46.472448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.797 [2024-04-24 21:40:46.472476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.797 [2024-04-24 21:40:46.490093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.797 [2024-04-24 21:40:46.490630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.797 [2024-04-24 21:40:46.490651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.797 [2024-04-24 21:40:46.508245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.797 [2024-04-24 21:40:46.508858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.797 [2024-04-24 21:40:46.508879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.797 [2024-04-24 21:40:46.525372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.797 [2024-04-24 21:40:46.525673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.797 [2024-04-24 21:40:46.525694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.797 [2024-04-24 21:40:46.543377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.797 [2024-04-24 21:40:46.544060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.797 [2024-04-24 21:40:46.544086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.797 [2024-04-24 21:40:46.560698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.797 [2024-04-24 21:40:46.561191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.797 [2024-04-24 21:40:46.561213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.797 [2024-04-24 21:40:46.579147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.797 [2024-04-24 21:40:46.579722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.797 
[2024-04-24 21:40:46.579743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.797 [2024-04-24 21:40:46.598513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.797 [2024-04-24 21:40:46.599061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.797 [2024-04-24 21:40:46.599082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.797 [2024-04-24 21:40:46.615552] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.797 [2024-04-24 21:40:46.616127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.797 [2024-04-24 21:40:46.616149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.797 [2024-04-24 21:40:46.633568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.797 [2024-04-24 21:40:46.634127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.797 [2024-04-24 21:40:46.634149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.797 [2024-04-24 21:40:46.651952] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.797 [2024-04-24 21:40:46.652587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.797 [2024-04-24 21:40:46.652609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.797 [2024-04-24 21:40:46.669310] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:23.797 [2024-04-24 21:40:46.669707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.797 [2024-04-24 21:40:46.669728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.054 [2024-04-24 21:40:46.687427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.054 [2024-04-24 21:40:46.687984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.054 [2024-04-24 21:40:46.688009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.054 [2024-04-24 21:40:46.705619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.054 [2024-04-24 21:40:46.706182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:24.054 [2024-04-24 21:40:46.706205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.054 [2024-04-24 21:40:46.724029] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.054 [2024-04-24 21:40:46.724497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.054 [2024-04-24 21:40:46.724519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.054 [2024-04-24 21:40:46.742708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.054 [2024-04-24 21:40:46.743269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.054 [2024-04-24 21:40:46.743290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.054 [2024-04-24 21:40:46.761433] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.054 [2024-04-24 21:40:46.761971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.054 [2024-04-24 21:40:46.761993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.054 [2024-04-24 21:40:46.779476] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.054 [2024-04-24 21:40:46.779858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.054 [2024-04-24 21:40:46.779879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.054 [2024-04-24 21:40:46.797446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.054 [2024-04-24 21:40:46.797919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.054 [2024-04-24 21:40:46.797940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.054 [2024-04-24 21:40:46.814784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.054 [2024-04-24 21:40:46.815328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.054 [2024-04-24 21:40:46.815349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.055 [2024-04-24 21:40:46.832867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.055 [2024-04-24 21:40:46.833359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.055 [2024-04-24 21:40:46.833380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.055 [2024-04-24 21:40:46.851420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.055 [2024-04-24 21:40:46.852103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.055 [2024-04-24 21:40:46.852125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.055 [2024-04-24 21:40:46.869184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.055 [2024-04-24 21:40:46.869810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.055 [2024-04-24 21:40:46.869831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.055 [2024-04-24 21:40:46.885197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.055 [2024-04-24 21:40:46.885683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.055 [2024-04-24 21:40:46.885704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.055 [2024-04-24 21:40:46.903311] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.055 [2024-04-24 21:40:46.903800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.055 [2024-04-24 21:40:46.903821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.055 [2024-04-24 21:40:46.921025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.055 [2024-04-24 21:40:46.921590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.055 [2024-04-24 21:40:46.921611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.055 [2024-04-24 21:40:46.938941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.055 [2024-04-24 21:40:46.939386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.055 [2024-04-24 21:40:46.939409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.312 [2024-04-24 21:40:46.955471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.312 [2024-04-24 21:40:46.955861] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-04-24 21:40:46.955886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.312 [2024-04-24 21:40:46.973011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.312 [2024-04-24 21:40:46.973487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-04-24 21:40:46.973509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.312 [2024-04-24 21:40:46.990677] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.312 [2024-04-24 21:40:46.991446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-04-24 21:40:46.991472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.312 [2024-04-24 21:40:47.008431] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.312 [2024-04-24 21:40:47.008970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.312 [2024-04-24 21:40:47.008997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.312 [2024-04-24 21:40:47.024934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.313 [2024-04-24 21:40:47.025344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-04-24 21:40:47.025366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.313 [2024-04-24 21:40:47.041725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.313 [2024-04-24 21:40:47.042358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-04-24 21:40:47.042379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.313 [2024-04-24 21:40:47.058367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.313 [2024-04-24 21:40:47.058833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-04-24 21:40:47.058855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.313 [2024-04-24 21:40:47.075637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.313 [2024-04-24 21:40:47.075966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-04-24 21:40:47.075987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.313 [2024-04-24 21:40:47.093410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.313 [2024-04-24 21:40:47.093986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-04-24 21:40:47.094008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.313 [2024-04-24 21:40:47.111222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.313 [2024-04-24 21:40:47.111891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-04-24 21:40:47.111913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.313 [2024-04-24 21:40:47.127898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.313 [2024-04-24 21:40:47.128437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-04-24 21:40:47.128462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.313 [2024-04-24 21:40:47.145520] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.313 [2024-04-24 21:40:47.145992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-04-24 21:40:47.146014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.313 [2024-04-24 21:40:47.162835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.313 [2024-04-24 21:40:47.163257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-04-24 21:40:47.163279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.313 [2024-04-24 21:40:47.180962] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.313 [2024-04-24 21:40:47.181506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-04-24 21:40:47.181528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.313 [2024-04-24 21:40:47.197849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.313 
[2024-04-24 21:40:47.198172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.313 [2024-04-24 21:40:47.198196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.569 [2024-04-24 21:40:47.215086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.569 [2024-04-24 21:40:47.215587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.569 [2024-04-24 21:40:47.215612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.569 [2024-04-24 21:40:47.233202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.569 [2024-04-24 21:40:47.233625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.569 [2024-04-24 21:40:47.233647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.569 [2024-04-24 21:40:47.251511] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.569 [2024-04-24 21:40:47.252052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.569 [2024-04-24 21:40:47.252073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.569 [2024-04-24 21:40:47.268334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.569 [2024-04-24 21:40:47.268956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.569 [2024-04-24 21:40:47.268977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.569 [2024-04-24 21:40:47.286331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.569 [2024-04-24 21:40:47.286765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.569 [2024-04-24 21:40:47.286786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.569 [2024-04-24 21:40:47.304364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.569 [2024-04-24 21:40:47.304947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.569 [2024-04-24 21:40:47.304968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.569 [2024-04-24 21:40:47.323153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with 
pdu=0x2000190fef90 00:25:24.569 [2024-04-24 21:40:47.323713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.569 [2024-04-24 21:40:47.323734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.569 [2024-04-24 21:40:47.340847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.569 [2024-04-24 21:40:47.341331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.569 [2024-04-24 21:40:47.341352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.569 [2024-04-24 21:40:47.358253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.569 [2024-04-24 21:40:47.358917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.569 [2024-04-24 21:40:47.358938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.569 [2024-04-24 21:40:47.377565] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.569 [2024-04-24 21:40:47.378110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.569 [2024-04-24 21:40:47.378131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.569 [2024-04-24 21:40:47.397728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.569 [2024-04-24 21:40:47.398116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.569 [2024-04-24 21:40:47.398137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.569 [2024-04-24 21:40:47.424291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.569 [2024-04-24 21:40:47.424909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.569 [2024-04-24 21:40:47.424930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.569 [2024-04-24 21:40:47.444855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.569 [2024-04-24 21:40:47.445335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.569 [2024-04-24 21:40:47.445357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.825 [2024-04-24 21:40:47.462794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.825 [2024-04-24 21:40:47.463282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.825 [2024-04-24 21:40:47.463307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.825 [2024-04-24 21:40:47.481596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.825 [2024-04-24 21:40:47.482155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.826 [2024-04-24 21:40:47.482181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.826 [2024-04-24 21:40:47.498130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.826 [2024-04-24 21:40:47.498564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.826 [2024-04-24 21:40:47.498585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.826 [2024-04-24 21:40:47.515864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.826 [2024-04-24 21:40:47.516414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.826 [2024-04-24 21:40:47.516436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.826 [2024-04-24 21:40:47.535860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.826 [2024-04-24 21:40:47.536339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.826 [2024-04-24 21:40:47.536361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.826 [2024-04-24 21:40:47.553184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.826 [2024-04-24 21:40:47.553856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.826 [2024-04-24 21:40:47.553878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.826 [2024-04-24 21:40:47.573255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.826 [2024-04-24 21:40:47.573865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.826 [2024-04-24 21:40:47.573886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.826 [2024-04-24 21:40:47.591968] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.826 [2024-04-24 21:40:47.592233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.826 [2024-04-24 21:40:47.592254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.826 [2024-04-24 21:40:47.610955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.826 [2024-04-24 21:40:47.611380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.826 [2024-04-24 21:40:47.611401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.826 [2024-04-24 21:40:47.631188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.826 [2024-04-24 21:40:47.631812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.826 [2024-04-24 21:40:47.631834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:24.826 [2024-04-24 21:40:47.650272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.826 [2024-04-24 21:40:47.650689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.826 [2024-04-24 21:40:47.650710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:24.826 [2024-04-24 21:40:47.678996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.826 [2024-04-24 21:40:47.679328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.826 [2024-04-24 21:40:47.679349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.826 [2024-04-24 21:40:47.696442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:24.826 [2024-04-24 21:40:47.696920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.826 [2024-04-24 21:40:47.696943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.082 [2024-04-24 21:40:47.715597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.082 [2024-04-24 21:40:47.716010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.082 [2024-04-24 21:40:47.716035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:25:25.082 [2024-04-24 21:40:47.734606] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.082 [2024-04-24 21:40:47.735152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.082 [2024-04-24 21:40:47.735174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.082 [2024-04-24 21:40:47.754208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.082 [2024-04-24 21:40:47.754773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.082 [2024-04-24 21:40:47.754795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.082 [2024-04-24 21:40:47.772051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.082 [2024-04-24 21:40:47.772600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.082 [2024-04-24 21:40:47.772622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.082 [2024-04-24 21:40:47.791065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.082 [2024-04-24 21:40:47.791626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.082 [2024-04-24 21:40:47.791647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.082 [2024-04-24 21:40:47.809000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.082 [2024-04-24 21:40:47.809393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.082 [2024-04-24 21:40:47.809414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.082 [2024-04-24 21:40:47.829062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.082 [2024-04-24 21:40:47.829565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.082 [2024-04-24 21:40:47.829587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.082 [2024-04-24 21:40:47.848571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.082 [2024-04-24 21:40:47.849218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.082 [2024-04-24 21:40:47.849239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.082 [2024-04-24 21:40:47.867789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.082 [2024-04-24 21:40:47.868113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.082 [2024-04-24 21:40:47.868134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.082 [2024-04-24 21:40:47.885141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.082 [2024-04-24 21:40:47.885747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.082 [2024-04-24 21:40:47.885768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.082 [2024-04-24 21:40:47.904319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.082 [2024-04-24 21:40:47.904879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.082 [2024-04-24 21:40:47.904900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.082 [2024-04-24 21:40:47.923527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.082 [2024-04-24 21:40:47.923926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.082 [2024-04-24 21:40:47.923946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.082 [2024-04-24 21:40:47.950481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.082 [2024-04-24 21:40:47.951057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.082 [2024-04-24 21:40:47.951079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.082 [2024-04-24 21:40:47.968148] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.083 [2024-04-24 21:40:47.968607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.083 [2024-04-24 21:40:47.968632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.339 [2024-04-24 21:40:47.987218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90 00:25:25.339 [2024-04-24 21:40:47.987707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.339 [2024-04-24 21:40:47.987738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:25.339 [2024-04-24 21:40:48.005591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90
00:25:25.339 [2024-04-24 21:40:48.006197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.339 [2024-04-24 21:40:48.006218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:25.339 [2024-04-24 21:40:48.024646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90
00:25:25.339 [2024-04-24 21:40:48.025204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.339 [2024-04-24 21:40:48.025226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:25.339 [2024-04-24 21:40:48.043223] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90
00:25:25.339 [2024-04-24 21:40:48.043858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.339 [2024-04-24 21:40:48.043880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:25.339 [2024-04-24 21:40:48.063412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6db080) with pdu=0x2000190fef90
00:25:25.339 [2024-04-24 21:40:48.063986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.339 [2024-04-24 21:40:48.064008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:25.339
00:25:25.339 Latency(us)
00:25:25.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:25.339 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:25:25.339 nvme0n1 : 2.01 1678.13 209.77 0.00 0.00 9512.12 7077.89 35441.87
00:25:25.339 ===================================================================================================================
00:25:25.339 Total : 1678.13 209.77 0.00 0.00 9512.12 7077.89 35441.87
00:25:25.339 0
00:25:25.339 21:40:48 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:25.339 21:40:48 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:25.339 21:40:48 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:25.339 | .driver_specific
00:25:25.339 | .nvme_error
00:25:25.339 | .status_code
00:25:25.339 | .command_transient_transport_error'
00:25:25.339 21:40:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:25.596 21:40:48 -- host/digest.sh@71 -- # (( 108 > 0 ))
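
The MiB/s column in the table above follows directly from the IOPS column: each I/O is 131072 bytes, i.e. 1/8 MiB, so 1678.13 IOPS works out to 1678.13 / 8 = 209.77 MiB/s, exactly as reported. A quick way to sanity-check any row:

    # IOPS -> MiB/s for fixed-size I/O (131072 B = 1/8 MiB); prints 209.77
    awk 'BEGIN { printf "%.2f\n", 1678.13 * 131072 / 1048576 }'

The same relation holds for the earlier 4 KiB run: 26607.30 * 4096 / 1048576 = 103.93 MiB/s.
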
00:25:25.596 21:40:48 -- host/digest.sh@73 -- # killprocess 2991958
00:25:25.596 21:40:48 -- common/autotest_common.sh@936 -- # '[' -z 2991958 ']'
00:25:25.596 21:40:48 -- common/autotest_common.sh@940 -- # kill -0 2991958
00:25:25.596 21:40:48 -- common/autotest_common.sh@941 -- # uname
00:25:25.596 21:40:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:25.596 21:40:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2991958
00:25:25.596 21:40:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:25:25.596 21:40:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:25:25.596 21:40:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2991958'
killing process with pid 2991958
00:25:25.596 21:40:48 -- common/autotest_common.sh@955 -- # kill 2991958
Received shutdown signal, test time was about 2.000000 seconds
00:25:25.597
00:25:25.597 Latency(us)
00:25:25.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:25.597 ===================================================================================================================
00:25:25.597 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:25.597 21:40:48 -- common/autotest_common.sh@960 -- # wait 2991958
00:25:25.854 21:40:48 -- host/digest.sh@116 -- # killprocess 2989825
00:25:25.854 21:40:48 -- common/autotest_common.sh@936 -- # '[' -z 2989825 ']'
00:25:25.854 21:40:48 -- common/autotest_common.sh@940 -- # kill -0 2989825
00:25:25.854 21:40:48 -- common/autotest_common.sh@941 -- # uname
00:25:25.854 21:40:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:25.854 21:40:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2989825
00:25:25.854 21:40:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:25:25.854 21:40:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:25:25.854 21:40:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2989825'
killing process with pid 2989825
00:25:25.854 21:40:48 -- common/autotest_common.sh@955 -- # kill 2989825
00:25:25.854 21:40:48 -- common/autotest_common.sh@960 -- # wait 2989825
00:25:26.111
00:25:26.111 real 0m16.654s
00:25:26.111 user 0m31.788s
00:25:26.111 sys 0m4.393s
00:25:26.111 21:40:48 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:25:26.111 21:40:48 -- common/autotest_common.sh@10 -- # set +x
00:25:26.111 ************************************
00:25:26.111 END TEST nvmf_digest_error
00:25:26.111 ************************************
00:25:26.111 21:40:48 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:25:26.111 21:40:48 -- host/digest.sh@150 -- # nvmftestfini
00:25:26.111 21:40:48 -- nvmf/common.sh@477 -- # nvmfcleanup
00:25:26.111 21:40:48 -- nvmf/common.sh@117 -- # sync
00:25:26.111 21:40:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:26.111 21:40:48 -- nvmf/common.sh@120 -- # set +e
00:25:26.111 21:40:48 -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:26.111 21:40:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:26.111 rmmod nvme_tcp
00:25:26.111 rmmod nvme_fabrics
00:25:26.111 rmmod nvme_keyring
00:25:26.111 21:40:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:26.111 21:40:48 -- nvmf/common.sh@124 -- # set -e
00:25:26.111 21:40:48 -- nvmf/common.sh@125 -- # return 0
00:25:26.111 21:40:48 -- nvmf/common.sh@478 -- # '[' -n 2989825 ']'
00:25:26.111 21:40:48 -- nvmf/common.sh@479 -- # killprocess 2989825
00:25:26.112 21:40:48 -- common/autotest_common.sh@936 -- # '[' -z 2989825 ']'
00:25:26.112 21:40:48 -- common/autotest_common.sh@940 -- # kill -0 2989825
00:25:26.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2989825) - No such process
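
The nvmftestfini cleanup that just ran reduces to: flush, unload the kernel NVMe/TCP initiator stack, then kill the nvmf target. The three rmmod lines are modprobe -v echoing each module it removes; the 'No such process' is harmless, since the digest-error test had already killed pid 2989825 itself. A sketch of the visible steps (run as root, as in CI; the real helper in nvmf/common.sh additionally loops up to 20 times under set +e and tears down the spdk netns):

    sync
    modprobe -v -r nvme-tcp       # also drags out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics   # no-op by now, everything is already unloaded
    if ! kill -0 2989825 2> /dev/null; then
        echo 'Process with pid 2989825 is not found'   # treated as success
    fi
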
common/autotest_common.sh@963 -- # echo 'Process with pid 2989825 is not found' 00:25:26.112 Process with pid 2989825 is not found 00:25:26.112 21:40:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:26.112 21:40:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:26.112 21:40:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:26.112 21:40:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:26.112 21:40:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:26.112 21:40:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.112 21:40:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.112 21:40:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.640 21:40:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:28.640 00:25:28.640 real 0m42.851s 00:25:28.640 user 1m6.138s 00:25:28.640 sys 0m14.012s 00:25:28.640 21:40:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:28.640 21:40:50 -- common/autotest_common.sh@10 -- # set +x 00:25:28.640 ************************************ 00:25:28.640 END TEST nvmf_digest 00:25:28.640 ************************************ 00:25:28.640 21:40:50 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:25:28.640 21:40:50 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:25:28.640 21:40:50 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:25:28.640 21:40:50 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:28.640 21:40:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:28.640 21:40:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:28.640 21:40:50 -- common/autotest_common.sh@10 -- # set +x 00:25:28.640 ************************************ 00:25:28.640 START TEST nvmf_bdevperf 00:25:28.640 ************************************ 00:25:28.640 21:40:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:28.640 * Looking for test storage... 
00:25:28.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.640 21:40:51 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.640 21:40:51 -- nvmf/common.sh@7 -- # uname -s 00:25:28.640 21:40:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.640 21:40:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.640 21:40:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.640 21:40:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.640 21:40:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.640 21:40:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.640 21:40:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.640 21:40:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.640 21:40:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.640 21:40:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.640 21:40:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:28.640 21:40:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:28.640 21:40:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.640 21:40:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.640 21:40:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.640 21:40:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.640 21:40:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.640 21:40:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.640 21:40:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.640 21:40:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.640 21:40:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.640 21:40:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.640 21:40:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.640 21:40:51 -- paths/export.sh@5 -- # export PATH 00:25:28.640 21:40:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.640 21:40:51 -- nvmf/common.sh@47 -- # : 0 00:25:28.640 21:40:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.640 21:40:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.640 21:40:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.640 21:40:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.640 21:40:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.640 21:40:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.640 21:40:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.640 21:40:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.640 21:40:51 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:28.640 21:40:51 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:28.640 21:40:51 -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:28.640 21:40:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:28.640 21:40:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.640 21:40:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:28.640 21:40:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:28.640 21:40:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:28.640 21:40:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.640 21:40:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.640 21:40:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.640 21:40:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:28.640 21:40:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:28.640 21:40:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:28.640 21:40:51 -- common/autotest_common.sh@10 -- # set +x 00:25:35.190 21:40:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:35.190 21:40:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:35.190 21:40:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:35.190 21:40:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:35.190 21:40:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:35.190 21:40:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:35.190 21:40:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:35.190 21:40:57 -- nvmf/common.sh@295 -- # net_devs=() 00:25:35.190 21:40:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:35.190 21:40:57 -- nvmf/common.sh@296 
-- # e810=() 00:25:35.190 21:40:57 -- nvmf/common.sh@296 -- # local -ga e810 00:25:35.190 21:40:57 -- nvmf/common.sh@297 -- # x722=() 00:25:35.190 21:40:57 -- nvmf/common.sh@297 -- # local -ga x722 00:25:35.190 21:40:57 -- nvmf/common.sh@298 -- # mlx=() 00:25:35.190 21:40:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:35.190 21:40:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:35.190 21:40:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:35.190 21:40:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:35.190 21:40:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:35.190 21:40:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:35.190 21:40:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:35.190 21:40:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:35.190 21:40:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:35.190 21:40:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:35.190 21:40:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:35.190 21:40:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:35.190 21:40:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:35.190 21:40:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:35.190 21:40:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:35.190 21:40:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:35.190 21:40:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:35.190 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:35.190 21:40:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:35.190 21:40:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:35.190 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:35.190 21:40:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:35.190 21:40:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:35.190 21:40:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.190 21:40:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:35.190 21:40:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.190 21:40:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:35.190 Found 
net devices under 0000:af:00.0: cvl_0_0 00:25:35.190 21:40:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.190 21:40:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:35.190 21:40:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.190 21:40:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:35.190 21:40:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.190 21:40:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:35.190 Found net devices under 0000:af:00.1: cvl_0_1 00:25:35.190 21:40:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.190 21:40:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:35.190 21:40:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:35.190 21:40:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:35.190 21:40:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:35.190 21:40:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.190 21:40:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.190 21:40:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:35.190 21:40:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:35.190 21:40:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:35.190 21:40:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:35.190 21:40:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:35.190 21:40:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:35.190 21:40:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.190 21:40:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:35.190 21:40:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:35.190 21:40:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:35.190 21:40:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:35.190 21:40:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:35.190 21:40:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:35.190 21:40:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:35.190 21:40:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:35.190 21:40:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:35.190 21:40:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:35.190 21:40:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:35.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:35.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:25:35.190 00:25:35.190 --- 10.0.0.2 ping statistics --- 00:25:35.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.190 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:25:35.190 21:40:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:35.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:35.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:25:35.190 00:25:35.190 --- 10.0.0.1 ping statistics --- 00:25:35.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.190 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:25:35.190 21:40:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:35.190 21:40:57 -- nvmf/common.sh@411 -- # return 0 00:25:35.191 21:40:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:35.191 21:40:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:35.191 21:40:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:35.191 21:40:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:35.191 21:40:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:35.191 21:40:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:35.191 21:40:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:35.191 21:40:57 -- host/bdevperf.sh@25 -- # tgt_init 00:25:35.191 21:40:57 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:35.191 21:40:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:35.191 21:40:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:35.191 21:40:57 -- common/autotest_common.sh@10 -- # set +x 00:25:35.191 21:40:57 -- nvmf/common.sh@470 -- # nvmfpid=2996341 00:25:35.191 21:40:57 -- nvmf/common.sh@471 -- # waitforlisten 2996341 00:25:35.191 21:40:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:35.191 21:40:57 -- common/autotest_common.sh@817 -- # '[' -z 2996341 ']' 00:25:35.191 21:40:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.191 21:40:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:35.191 21:40:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.191 21:40:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:35.191 21:40:57 -- common/autotest_common.sh@10 -- # set +x 00:25:35.191 [2024-04-24 21:40:57.998402] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:25:35.191 [2024-04-24 21:40:57.998455] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.191 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.191 [2024-04-24 21:40:58.071879] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:35.447 [2024-04-24 21:40:58.145053] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.447 [2024-04-24 21:40:58.145092] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.447 [2024-04-24 21:40:58.145102] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.447 [2024-04-24 21:40:58.145111] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.447 [2024-04-24 21:40:58.145118] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
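The nvmf_tcp_init trace above moves one physical port into a private network namespace so a single host can act as both NVMe/TCP target and initiator. A condensed sketch of the same topology, using only the commands the trace shows and assuming the two ice ports are already named cvl_0_0 and cvl_0_1 as in this run:
# target-side port goes into its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator keeps cvl_0_1 in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP traffic to port 4420 through the host firewall
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify both directions before starting the target inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
The target is then launched inside the namespace, exactly as the nvmfappstart line above does with ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -m 0xE, so that the bdevperf initiator in the root namespace reaches it only over TCP at 10.0.0.2:4420.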
00:25:35.447 [2024-04-24 21:40:58.145222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.447 [2024-04-24 21:40:58.145309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:35.447 [2024-04-24 21:40:58.145311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.011 21:40:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:36.011 21:40:58 -- common/autotest_common.sh@850 -- # return 0 00:25:36.011 21:40:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:36.011 21:40:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:36.011 21:40:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.011 21:40:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.011 21:40:58 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:36.011 21:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.011 21:40:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.011 [2024-04-24 21:40:58.852698] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.011 21:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.011 21:40:58 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:36.011 21:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.011 21:40:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.268 Malloc0 00:25:36.268 21:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.268 21:40:58 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:36.268 21:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.268 21:40:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.268 21:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.268 21:40:58 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:36.268 21:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.268 21:40:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.268 21:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.268 21:40:58 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.268 21:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.268 21:40:58 -- common/autotest_common.sh@10 -- # set +x 00:25:36.268 [2024-04-24 21:40:58.923029] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.268 21:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.268 21:40:58 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:36.268 21:40:58 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:36.268 21:40:58 -- nvmf/common.sh@521 -- # config=() 00:25:36.268 21:40:58 -- nvmf/common.sh@521 -- # local subsystem config 00:25:36.268 21:40:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:36.268 21:40:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:36.268 { 00:25:36.268 "params": { 00:25:36.268 "name": "Nvme$subsystem", 00:25:36.268 "trtype": "$TEST_TRANSPORT", 00:25:36.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.268 "adrfam": "ipv4", 00:25:36.268 "trsvcid": "$NVMF_PORT", 00:25:36.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.268 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.268 "hdgst": ${hdgst:-false}, 00:25:36.268 "ddgst": ${ddgst:-false} 00:25:36.268 }, 00:25:36.268 "method": "bdev_nvme_attach_controller" 00:25:36.268 } 00:25:36.268 EOF 00:25:36.268 )") 00:25:36.268 21:40:58 -- nvmf/common.sh@543 -- # cat 00:25:36.268 21:40:58 -- nvmf/common.sh@545 -- # jq . 00:25:36.268 21:40:58 -- nvmf/common.sh@546 -- # IFS=, 00:25:36.268 21:40:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:36.268 "params": { 00:25:36.268 "name": "Nvme1", 00:25:36.268 "trtype": "tcp", 00:25:36.268 "traddr": "10.0.0.2", 00:25:36.268 "adrfam": "ipv4", 00:25:36.268 "trsvcid": "4420", 00:25:36.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:36.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:36.268 "hdgst": false, 00:25:36.268 "ddgst": false 00:25:36.268 }, 00:25:36.268 "method": "bdev_nvme_attach_controller" 00:25:36.268 }' 00:25:36.268 [2024-04-24 21:40:58.973464] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:25:36.268 [2024-04-24 21:40:58.973510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2996505 ] 00:25:36.268 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.268 [2024-04-24 21:40:59.042893] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.268 [2024-04-24 21:40:59.111288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.525 Running I/O for 1 seconds... 00:25:37.459 00:25:37.459 Latency(us) 00:25:37.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.459 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:37.459 Verification LBA range: start 0x0 length 0x4000 00:25:37.459 Nvme1n1 : 1.01 11596.28 45.30 0.00 0.00 10989.43 2018.51 29569.84 00:25:37.459 =================================================================================================================== 00:25:37.459 Total : 11596.28 45.30 0.00 0.00 10989.43 2018.51 29569.84 00:25:37.716 21:41:00 -- host/bdevperf.sh@30 -- # bdevperfpid=2996775 00:25:37.716 21:41:00 -- host/bdevperf.sh@32 -- # sleep 3 00:25:37.716 21:41:00 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:37.716 21:41:00 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:37.716 21:41:00 -- nvmf/common.sh@521 -- # config=() 00:25:37.716 21:41:00 -- nvmf/common.sh@521 -- # local subsystem config 00:25:37.716 21:41:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:37.716 21:41:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:37.716 { 00:25:37.716 "params": { 00:25:37.716 "name": "Nvme$subsystem", 00:25:37.716 "trtype": "$TEST_TRANSPORT", 00:25:37.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.716 "adrfam": "ipv4", 00:25:37.716 "trsvcid": "$NVMF_PORT", 00:25:37.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.716 "hdgst": ${hdgst:-false}, 00:25:37.716 "ddgst": ${ddgst:-false} 00:25:37.716 }, 00:25:37.716 "method": "bdev_nvme_attach_controller" 00:25:37.716 } 00:25:37.716 EOF 00:25:37.716 )") 00:25:37.716 21:41:00 -- nvmf/common.sh@543 -- # cat 00:25:37.716 21:41:00 -- nvmf/common.sh@545 -- # jq . 
00:25:37.716 21:41:00 -- nvmf/common.sh@546 -- # IFS=, 00:25:37.716 21:41:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:37.716 "params": { 00:25:37.716 "name": "Nvme1", 00:25:37.716 "trtype": "tcp", 00:25:37.716 "traddr": "10.0.0.2", 00:25:37.716 "adrfam": "ipv4", 00:25:37.716 "trsvcid": "4420", 00:25:37.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:37.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:37.716 "hdgst": false, 00:25:37.716 "ddgst": false 00:25:37.716 }, 00:25:37.716 "method": "bdev_nvme_attach_controller" 00:25:37.716 }' 00:25:37.716 [2024-04-24 21:41:00.532900] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:25:37.716 [2024-04-24 21:41:00.532951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2996775 ] 00:25:37.716 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.716 [2024-04-24 21:41:00.603033] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.973 [2024-04-24 21:41:00.668896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.231 Running I/O for 15 seconds... 00:25:40.766 21:41:03 -- host/bdevperf.sh@33 -- # kill -9 2996341 00:25:40.766 21:41:03 -- host/bdevperf.sh@35 -- # sleep 3 00:25:40.766 [2024-04-24 21:41:03.502283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.766 [2024-04-24 21:41:03.502326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.766 [2024-04-24 21:41:03.502346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.766 [2024-04-24 21:41:03.502358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.766 [2024-04-24 21:41:03.502371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.766 [2024-04-24 21:41:03.502382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.766 [2024-04-24 21:41:03.502394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.766 [2024-04-24 21:41:03.502405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.766 [2024-04-24 21:41:03.502417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.766 [2024-04-24 21:41:03.502428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.766 [2024-04-24 21:41:03.502440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.767 [2024-04-24 21:41:03.502587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.767 [2024-04-24 21:41:03.502600] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.767 [2024-04-24 21:41:03.502610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.767
[~100 further READ command / ABORTED - SQ DELETION (00/08) completion pairs elided: identical apart from cid and lba, covering lba 106136 through 106928 on qid:1, all stamped 21:41:03.502-.504, emitted while nvmf_tgt (pid 2996341) is down after the kill -9 above]
sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.504719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.504728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.504739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.504749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.504761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.504770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.504781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.504790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.504801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.504810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.504821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.504830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.504843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.504852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.504862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.504872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.504883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.504892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.504903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.504913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.769 [2024-04-24 21:41:03.504923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.504934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.504945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.504954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.504967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.504976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.504988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.504997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.505010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.505019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.505029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.505040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.505051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.505060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.505071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.505080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.505090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.505099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.505110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.769 [2024-04-24 21:41:03.505120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 
21:41:03.505130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208b080 is same with the state(5) to be set 00:25:40.769 [2024-04-24 21:41:03.505142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:40.769 [2024-04-24 21:41:03.505149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:40.769 [2024-04-24 21:41:03.505158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107096 len:8 PRP1 0x0 PRP2 0x0 00:25:40.769 [2024-04-24 21:41:03.505168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.505215] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x208b080 was disconnected and freed. reset controller. 00:25:40.769 [2024-04-24 21:41:03.505263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.769 [2024-04-24 21:41:03.505275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.505285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.769 [2024-04-24 21:41:03.505294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.505304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.769 [2024-04-24 21:41:03.505313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.505323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.769 [2024-04-24 21:41:03.505332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.769 [2024-04-24 21:41:03.505343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:40.769 [2024-04-24 21:41:03.508019] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.769 [2024-04-24 21:41:03.508046] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:40.769 [2024-04-24 21:41:03.508901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.769 [2024-04-24 21:41:03.509411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.769 [2024-04-24 21:41:03.509424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:40.769 [2024-04-24 21:41:03.509434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:40.769 [2024-04-24 21:41:03.509610] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:40.769 [2024-04-24 21:41:03.509780] 
nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.769 [2024-04-24 21:41:03.509791] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.769 [2024-04-24 21:41:03.509802] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.769 [2024-04-24 21:41:03.512468] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.769 [2024-04-24 21:41:03.521026] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.769 [2024-04-24 21:41:03.521716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.769 [2024-04-24 21:41:03.522281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.769 [2024-04-24 21:41:03.522321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:40.769 [2024-04-24 21:41:03.522355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:40.769 [2024-04-24 21:41:03.522799] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:40.769 [2024-04-24 21:41:03.522966] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.769 [2024-04-24 21:41:03.522978] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.769 [2024-04-24 21:41:03.522988] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.769 [2024-04-24 21:41:03.525478] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.769 [2024-04-24 21:41:03.533724] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.769 [2024-04-24 21:41:03.534326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.769 [2024-04-24 21:41:03.534870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.769 [2024-04-24 21:41:03.534913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:40.769 [2024-04-24 21:41:03.534946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:40.769 [2024-04-24 21:41:03.535548] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:40.769 [2024-04-24 21:41:03.535995] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.769 [2024-04-24 21:41:03.536006] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.769 [2024-04-24 21:41:03.536014] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.769 [2024-04-24 21:41:03.538466] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
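Before the retry log continues, a note on the completions condensed at the top of this section: the "(00/08)" pair that spdk_nvme_print_completion prints is (status code type / status code). Under the NVMe spec, SCT 0x00 is Generic Command Status and SC 0x08 is "Command Aborted due to SQ Deletion", i.e. every queued READ was aborted because its I/O submission queue was torn down during the reset. A minimal standalone C sketch of that decoding (illustrative only, not SPDK code; decode_generic_sc is a made-up helper, the constants are NVMe-spec values and the 0x08 string matches what the log shows):

    #include <stdio.h>

    /* Map NVMe Generic Command Status (SCT 0x00) codes to readable strings. */
    static const char *decode_generic_sc(unsigned sc)
    {
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   return "other generic status";
        }
    }

    int main(void)
    {
        unsigned sct = 0x00, sc = 0x08;  /* the (00/08) pair from the log */
        if (sct == 0x00)
            printf("(%02x/%02x) = %s\n", sct, sc, decode_generic_sc(sc));
        return 0;
    }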
00:25:40.769 [2024-04-24 21:41:03.546424] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.769 [2024-04-24 21:41:03.547113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.769 [2024-04-24 21:41:03.547672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.769 [2024-04-24 21:41:03.547714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:40.769 [2024-04-24 21:41:03.547747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:40.769 [2024-04-24 21:41:03.548235] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:40.769 [2024-04-24 21:41:03.548472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.769 [2024-04-24 21:41:03.548487] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.769 [2024-04-24 21:41:03.548500] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.769 [2024-04-24 21:41:03.552217] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.769 [2024-04-24 21:41:03.559881] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.769 [2024-04-24 21:41:03.560557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.561112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.561152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:40.770 [2024-04-24 21:41:03.561184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:40.770 [2024-04-24 21:41:03.561392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:40.770 [2024-04-24 21:41:03.561573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.770 [2024-04-24 21:41:03.561585] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.770 [2024-04-24 21:41:03.561594] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.770 [2024-04-24 21:41:03.564099] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:40.770 [2024-04-24 21:41:03.572543] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.770 [2024-04-24 21:41:03.573214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.573783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.573826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:40.770 [2024-04-24 21:41:03.573859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:40.770 [2024-04-24 21:41:03.574323] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:40.770 [2024-04-24 21:41:03.574494] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.770 [2024-04-24 21:41:03.574505] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.770 [2024-04-24 21:41:03.574514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.770 [2024-04-24 21:41:03.576965] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.770 [2024-04-24 21:41:03.585221] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.770 [2024-04-24 21:41:03.585859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.586337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.586349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:40.770 [2024-04-24 21:41:03.586358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:40.770 [2024-04-24 21:41:03.586539] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:40.770 [2024-04-24 21:41:03.586704] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.770 [2024-04-24 21:41:03.586715] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.770 [2024-04-24 21:41:03.586724] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.770 [2024-04-24 21:41:03.589220] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:40.770 [2024-04-24 21:41:03.597944] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.770 [2024-04-24 21:41:03.598611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.599148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.599187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:40.770 [2024-04-24 21:41:03.599218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:40.770 [2024-04-24 21:41:03.599461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:40.770 [2024-04-24 21:41:03.599697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.770 [2024-04-24 21:41:03.599712] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.770 [2024-04-24 21:41:03.599724] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.770 [2024-04-24 21:41:03.603437] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.770 [2024-04-24 21:41:03.611299] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.770 [2024-04-24 21:41:03.611977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.612438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.612495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:40.770 [2024-04-24 21:41:03.612527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:40.770 [2024-04-24 21:41:03.613112] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:40.770 [2024-04-24 21:41:03.613328] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.770 [2024-04-24 21:41:03.613339] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.770 [2024-04-24 21:41:03.613348] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.770 [2024-04-24 21:41:03.615872] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:40.770 [2024-04-24 21:41:03.624040] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.770 [2024-04-24 21:41:03.624712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.625250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.625289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:40.770 [2024-04-24 21:41:03.625321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:40.770 [2024-04-24 21:41:03.625922] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:40.770 [2024-04-24 21:41:03.626313] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.770 [2024-04-24 21:41:03.626324] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.770 [2024-04-24 21:41:03.626333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.770 [2024-04-24 21:41:03.628799] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.770 [2024-04-24 21:41:03.636785] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.770 [2024-04-24 21:41:03.637438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.637879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.637920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:40.770 [2024-04-24 21:41:03.637952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:40.770 [2024-04-24 21:41:03.638363] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:40.770 [2024-04-24 21:41:03.638531] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.770 [2024-04-24 21:41:03.638542] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.770 [2024-04-24 21:41:03.638550] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.770 [2024-04-24 21:41:03.640991] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:40.770 [2024-04-24 21:41:03.649746] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.770 [2024-04-24 21:41:03.650416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.650798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.770 [2024-04-24 21:41:03.650814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:40.770 [2024-04-24 21:41:03.650824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:40.770 [2024-04-24 21:41:03.650996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.034 [2024-04-24 21:41:03.651167] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.034 [2024-04-24 21:41:03.651179] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.034 [2024-04-24 21:41:03.651189] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.034 [2024-04-24 21:41:03.653878] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.034 [2024-04-24 21:41:03.662615] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.035 [2024-04-24 21:41:03.663231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.663749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.663800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.035 [2024-04-24 21:41:03.663835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.035 [2024-04-24 21:41:03.664326] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.035 [2024-04-24 21:41:03.664489] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.035 [2024-04-24 21:41:03.664501] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.035 [2024-04-24 21:41:03.664510] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.035 [2024-04-24 21:41:03.666991] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.035 [2024-04-24 21:41:03.675484] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.035 [2024-04-24 21:41:03.676187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.676747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.676789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.035 [2024-04-24 21:41:03.676822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.035 [2024-04-24 21:41:03.677187] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.035 [2024-04-24 21:41:03.677345] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.035 [2024-04-24 21:41:03.677356] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.035 [2024-04-24 21:41:03.677364] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.035 [2024-04-24 21:41:03.679892] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.035 [2024-04-24 21:41:03.688272] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.035 [2024-04-24 21:41:03.688926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.689397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.689437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.035 [2024-04-24 21:41:03.689484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.035 [2024-04-24 21:41:03.690070] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.035 [2024-04-24 21:41:03.690352] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.035 [2024-04-24 21:41:03.690367] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.035 [2024-04-24 21:41:03.690380] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.035 [2024-04-24 21:41:03.694085] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.035 [2024-04-24 21:41:03.701695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.035 [2024-04-24 21:41:03.702365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.702872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.702913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.035 [2024-04-24 21:41:03.702954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.035 [2024-04-24 21:41:03.703274] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.035 [2024-04-24 21:41:03.703440] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.035 [2024-04-24 21:41:03.703456] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.035 [2024-04-24 21:41:03.703466] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.035 [2024-04-24 21:41:03.705925] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.035 [2024-04-24 21:41:03.714458] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.035 [2024-04-24 21:41:03.715113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.715594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.715606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.035 [2024-04-24 21:41:03.715615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.035 [2024-04-24 21:41:03.715770] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.035 [2024-04-24 21:41:03.715927] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.035 [2024-04-24 21:41:03.715936] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.035 [2024-04-24 21:41:03.715946] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.035 [2024-04-24 21:41:03.718415] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.035 [2024-04-24 21:41:03.727225] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.035 [2024-04-24 21:41:03.727828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.728343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.728383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.035 [2024-04-24 21:41:03.728415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.035 [2024-04-24 21:41:03.729017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.035 [2024-04-24 21:41:03.729489] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.035 [2024-04-24 21:41:03.729500] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.035 [2024-04-24 21:41:03.729510] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.035 [2024-04-24 21:41:03.731960] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.035 [2024-04-24 21:41:03.739914] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.035 [2024-04-24 21:41:03.740510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.740962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.741001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.035 [2024-04-24 21:41:03.741033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.035 [2024-04-24 21:41:03.741488] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.035 [2024-04-24 21:41:03.741669] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.035 [2024-04-24 21:41:03.741680] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.035 [2024-04-24 21:41:03.741689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.035 [2024-04-24 21:41:03.744185] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.035 [2024-04-24 21:41:03.752615] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.035 [2024-04-24 21:41:03.753262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.753785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.753799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.035 [2024-04-24 21:41:03.753809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.035 [2024-04-24 21:41:03.753966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.035 [2024-04-24 21:41:03.754123] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.035 [2024-04-24 21:41:03.754135] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.035 [2024-04-24 21:41:03.754143] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.035 [2024-04-24 21:41:03.756762] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.035 [2024-04-24 21:41:03.765415] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.035 [2024-04-24 21:41:03.766115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.766638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.766653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.035 [2024-04-24 21:41:03.766662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.035 [2024-04-24 21:41:03.766829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.035 [2024-04-24 21:41:03.766995] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.035 [2024-04-24 21:41:03.767007] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.035 [2024-04-24 21:41:03.767016] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.035 [2024-04-24 21:41:03.769697] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.035 [2024-04-24 21:41:03.778354] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.035 [2024-04-24 21:41:03.778921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.779486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.779528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.035 [2024-04-24 21:41:03.779562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.035 [2024-04-24 21:41:03.780148] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.035 [2024-04-24 21:41:03.780753] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.035 [2024-04-24 21:41:03.780787] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.035 [2024-04-24 21:41:03.780796] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.035 [2024-04-24 21:41:03.784253] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.035 [2024-04-24 21:41:03.792063] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.035 [2024-04-24 21:41:03.792729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.793199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.035 [2024-04-24 21:41:03.793240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.035 [2024-04-24 21:41:03.793273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.035 [2024-04-24 21:41:03.793697] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.035 [2024-04-24 21:41:03.793854] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.035 [2024-04-24 21:41:03.793865] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.035 [2024-04-24 21:41:03.793875] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.035 [2024-04-24 21:41:03.796437] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.035 [2024-04-24 21:41:03.804847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.035 [2024-04-24 21:41:03.805443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-04-24 21:41:03.805978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-04-24 21:41:03.806018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.036 [2024-04-24 21:41:03.806051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.036 [2024-04-24 21:41:03.806650] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.036 [2024-04-24 21:41:03.806808] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.036 [2024-04-24 21:41:03.806819] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.036 [2024-04-24 21:41:03.806828] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.036 [2024-04-24 21:41:03.809350] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.036 [2024-04-24 21:41:03.817614] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.036 [2024-04-24 21:41:03.818214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-04-24 21:41:03.818726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-04-24 21:41:03.818768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.036 [2024-04-24 21:41:03.818801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.036 [2024-04-24 21:41:03.819386] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.036 [2024-04-24 21:41:03.819917] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.036 [2024-04-24 21:41:03.819932] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.036 [2024-04-24 21:41:03.819941] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.036 [2024-04-24 21:41:03.822426] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.036 [2024-04-24 21:41:03.830386] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.036 [2024-04-24 21:41:03.831063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-04-24 21:41:03.831629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-04-24 21:41:03.831671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.036 [2024-04-24 21:41:03.831714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.036 [2024-04-24 21:41:03.831950] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.036 [2024-04-24 21:41:03.832185] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.036 [2024-04-24 21:41:03.832201] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.036 [2024-04-24 21:41:03.832213] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.036 [2024-04-24 21:41:03.835928] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.036 [2024-04-24 21:41:03.843579] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.036 [2024-04-24 21:41:03.844247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-04-24 21:41:03.844762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-04-24 21:41:03.844804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.036 [2024-04-24 21:41:03.844837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.036 [2024-04-24 21:41:03.845422] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.036 [2024-04-24 21:41:03.845640] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.036 [2024-04-24 21:41:03.845651] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.036 [2024-04-24 21:41:03.845660] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.036 [2024-04-24 21:41:03.848169] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.036 [2024-04-24 21:41:03.856348] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.036 [2024-04-24 21:41:03.857005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-04-24 21:41:03.857518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-04-24 21:41:03.857562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.036 [2024-04-24 21:41:03.857595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.036 [2024-04-24 21:41:03.858018] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.036 [2024-04-24 21:41:03.858174] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.036 [2024-04-24 21:41:03.858185] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.036 [2024-04-24 21:41:03.858197] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.036 [2024-04-24 21:41:03.860645] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:41.036 [2024-04-24 21:41:03.869023] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.036 [2024-04-24 21:41:03.869694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-04-24 21:41:03.870250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.036 [2024-04-24 21:41:03.870291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:41.036 [2024-04-24 21:41:03.870323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:41.036 [2024-04-24 21:41:03.870708] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:41.036 [2024-04-24 21:41:03.870881] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:41.036 [2024-04-24 21:41:03.870893] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:41.036 [2024-04-24 21:41:03.870902] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.036 [2024-04-24 21:41:03.873384] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.036 [2024-04-24 21:41:03.881688] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.036 [2024-04-24 21:41:03.882359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.036 [2024-04-24 21:41:03.882894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.036 [2024-04-24 21:41:03.882939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.036 [2024-04-24 21:41:03.882973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.036 [2024-04-24 21:41:03.883572] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.036 [2024-04-24 21:41:03.883964] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.036 [2024-04-24 21:41:03.883975] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.036 [2024-04-24 21:41:03.883984] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.036 [2024-04-24 21:41:03.886426] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.036 [2024-04-24 21:41:03.894577] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.036 [2024-04-24 21:41:03.895251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.036 [2024-04-24 21:41:03.895729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.036 [2024-04-24 21:41:03.895771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.036 [2024-04-24 21:41:03.895811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.036 [2024-04-24 21:41:03.895980] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.036 [2024-04-24 21:41:03.896150] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.036 [2024-04-24 21:41:03.896161] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.036 [2024-04-24 21:41:03.896170] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.036 [2024-04-24 21:41:03.898748] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.036 [2024-04-24 21:41:03.907232] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.036 [2024-04-24 21:41:03.907891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.036 [2024-04-24 21:41:03.908369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.036 [2024-04-24 21:41:03.908409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.036 [2024-04-24 21:41:03.908441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.036 [2024-04-24 21:41:03.908844] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.036 [2024-04-24 21:41:03.909011] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.036 [2024-04-24 21:41:03.909022] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.036 [2024-04-24 21:41:03.909032] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.036 [2024-04-24 21:41:03.911511] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.295 [2024-04-24 21:41:03.920181] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.295 [2024-04-24 21:41:03.920872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.295 [2024-04-24 21:41:03.921320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.295 [2024-04-24 21:41:03.921334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.295 [2024-04-24 21:41:03.921344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.295 [2024-04-24 21:41:03.921518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.295 [2024-04-24 21:41:03.921684] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.295 [2024-04-24 21:41:03.921696] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.295 [2024-04-24 21:41:03.921705] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.295 [2024-04-24 21:41:03.924206] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.295 [2024-04-24 21:41:03.932922] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.295 [2024-04-24 21:41:03.933527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.295 [2024-04-24 21:41:03.934062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.295 [2024-04-24 21:41:03.934102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.295 [2024-04-24 21:41:03.934136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.295 [2024-04-24 21:41:03.934740] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.295 [2024-04-24 21:41:03.935151] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.295 [2024-04-24 21:41:03.935162] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.295 [2024-04-24 21:41:03.935170] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.295 [2024-04-24 21:41:03.937645] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.295 [2024-04-24 21:41:03.945658] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.295 [2024-04-24 21:41:03.946256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.295 [2024-04-24 21:41:03.946791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.295 [2024-04-24 21:41:03.946835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.295 [2024-04-24 21:41:03.946868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.295 [2024-04-24 21:41:03.947466] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.295 [2024-04-24 21:41:03.948062] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.295 [2024-04-24 21:41:03.948072] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.296 [2024-04-24 21:41:03.948081] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.296 [2024-04-24 21:41:03.950526] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.296 [2024-04-24 21:41:03.958461] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.296 [2024-04-24 21:41:03.958996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:03.959475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:03.959516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.296 [2024-04-24 21:41:03.959550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.296 [2024-04-24 21:41:03.959908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.296 [2024-04-24 21:41:03.960073] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.296 [2024-04-24 21:41:03.960085] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.296 [2024-04-24 21:41:03.960094] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.296 [2024-04-24 21:41:03.962576] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.296 [2024-04-24 21:41:03.971231] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.296 [2024-04-24 21:41:03.971677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:03.972152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:03.972193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.296 [2024-04-24 21:41:03.972225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.296 [2024-04-24 21:41:03.972825] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.296 [2024-04-24 21:41:03.973302] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.296 [2024-04-24 21:41:03.973317] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.296 [2024-04-24 21:41:03.973329] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.296 [2024-04-24 21:41:03.977043] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.296 [2024-04-24 21:41:03.984591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.296 [2024-04-24 21:41:03.985185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:03.985666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:03.985709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.296 [2024-04-24 21:41:03.985742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.296 [2024-04-24 21:41:03.986173] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.296 [2024-04-24 21:41:03.986338] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.296 [2024-04-24 21:41:03.986350] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.296 [2024-04-24 21:41:03.986359] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.296 [2024-04-24 21:41:03.988957] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.296 [2024-04-24 21:41:03.997240] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.296 [2024-04-24 21:41:03.997905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:03.998414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:03.998468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.296 [2024-04-24 21:41:03.998503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.296 [2024-04-24 21:41:03.999088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.296 [2024-04-24 21:41:03.999395] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.296 [2024-04-24 21:41:03.999406] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.296 [2024-04-24 21:41:03.999415] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.296 [2024-04-24 21:41:04.001876] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.296 [2024-04-24 21:41:04.009952] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.296 [2024-04-24 21:41:04.010603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:04.011084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:04.011124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.296 [2024-04-24 21:41:04.011156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.296 [2024-04-24 21:41:04.011392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.296 [2024-04-24 21:41:04.011574] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.296 [2024-04-24 21:41:04.011585] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.296 [2024-04-24 21:41:04.011594] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.296 [2024-04-24 21:41:04.014269] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.296 [2024-04-24 21:41:04.022910] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.296 [2024-04-24 21:41:04.023589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:04.024123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:04.024171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.296 [2024-04-24 21:41:04.024203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.296 [2024-04-24 21:41:04.024804] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.296 [2024-04-24 21:41:04.025301] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.296 [2024-04-24 21:41:04.025312] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.296 [2024-04-24 21:41:04.025320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.296 [2024-04-24 21:41:04.027844] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.296 [2024-04-24 21:41:04.035648] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.296 [2024-04-24 21:41:04.036285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:04.036820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:04.036861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.296 [2024-04-24 21:41:04.036894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.296 [2024-04-24 21:41:04.037110] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.296 [2024-04-24 21:41:04.037268] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.296 [2024-04-24 21:41:04.037278] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.296 [2024-04-24 21:41:04.037287] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.296 [2024-04-24 21:41:04.039823] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.296 [2024-04-24 21:41:04.048413] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.296 [2024-04-24 21:41:04.049085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:04.049622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:04.049664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.296 [2024-04-24 21:41:04.049697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.296 [2024-04-24 21:41:04.050062] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.296 [2024-04-24 21:41:04.050219] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.296 [2024-04-24 21:41:04.050229] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.296 [2024-04-24 21:41:04.050238] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.296 [2024-04-24 21:41:04.052679] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.296 [2024-04-24 21:41:04.061041] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.296 [2024-04-24 21:41:04.061706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:04.062237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.296 [2024-04-24 21:41:04.062276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.296 [2024-04-24 21:41:04.062307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.296 [2024-04-24 21:41:04.062469] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.296 [2024-04-24 21:41:04.062650] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.297 [2024-04-24 21:41:04.062661] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.297 [2024-04-24 21:41:04.062671] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.297 [2024-04-24 21:41:04.065170] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.297 [2024-04-24 21:41:04.073789] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.297 [2024-04-24 21:41:04.074438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.074939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.074980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.297 [2024-04-24 21:41:04.075013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.297 [2024-04-24 21:41:04.075397] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.297 [2024-04-24 21:41:04.075560] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.297 [2024-04-24 21:41:04.075571] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.297 [2024-04-24 21:41:04.075580] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.297 [2024-04-24 21:41:04.078063] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.297 [2024-04-24 21:41:04.086607] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.297 [2024-04-24 21:41:04.087251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.087702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.087716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.297 [2024-04-24 21:41:04.087726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.297 [2024-04-24 21:41:04.087884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.297 [2024-04-24 21:41:04.088040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.297 [2024-04-24 21:41:04.088051] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.297 [2024-04-24 21:41:04.088059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.297 [2024-04-24 21:41:04.090574] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.297 [2024-04-24 21:41:04.099243] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.297 [2024-04-24 21:41:04.099884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.100399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.100438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.297 [2024-04-24 21:41:04.100485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.297 [2024-04-24 21:41:04.101080] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.297 [2024-04-24 21:41:04.101585] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.297 [2024-04-24 21:41:04.101596] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.297 [2024-04-24 21:41:04.101606] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.297 [2024-04-24 21:41:04.104091] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.297 [2024-04-24 21:41:04.111966] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.297 [2024-04-24 21:41:04.112357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.112758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.112771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.297 [2024-04-24 21:41:04.112781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.297 [2024-04-24 21:41:04.112938] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.297 [2024-04-24 21:41:04.113093] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.297 [2024-04-24 21:41:04.113104] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.297 [2024-04-24 21:41:04.113113] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.297 [2024-04-24 21:41:04.115578] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.297 [2024-04-24 21:41:04.124813] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.297 [2024-04-24 21:41:04.125423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.125834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.125876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.297 [2024-04-24 21:41:04.125908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.297 [2024-04-24 21:41:04.126510] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.297 [2024-04-24 21:41:04.126789] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.297 [2024-04-24 21:41:04.126801] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.297 [2024-04-24 21:41:04.126810] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.297 [2024-04-24 21:41:04.129333] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.297 [2024-04-24 21:41:04.137565] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.297 [2024-04-24 21:41:04.138963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.139479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.139526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.297 [2024-04-24 21:41:04.139561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.297 [2024-04-24 21:41:04.139853] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.297 [2024-04-24 21:41:04.140022] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.297 [2024-04-24 21:41:04.140033] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.297 [2024-04-24 21:41:04.140043] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.297 [2024-04-24 21:41:04.142624] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.297 [2024-04-24 21:41:04.150544] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.297 [2024-04-24 21:41:04.151196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.151608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.151651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.297 [2024-04-24 21:41:04.151685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.297 [2024-04-24 21:41:04.152274] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.297 [2024-04-24 21:41:04.152524] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.297 [2024-04-24 21:41:04.152536] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.297 [2024-04-24 21:41:04.152545] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.297 [2024-04-24 21:41:04.155220] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.297 [2024-04-24 21:41:04.163458] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.297 [2024-04-24 21:41:04.164032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.164497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.164511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.297 [2024-04-24 21:41:04.164521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.297 [2024-04-24 21:41:04.164702] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.297 [2024-04-24 21:41:04.164872] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.297 [2024-04-24 21:41:04.164884] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.297 [2024-04-24 21:41:04.164893] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.297 [2024-04-24 21:41:04.167553] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.297 [2024-04-24 21:41:04.176328] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.297 [2024-04-24 21:41:04.176848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.177257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.297 [2024-04-24 21:41:04.177273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.297 [2024-04-24 21:41:04.177283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.297 [2024-04-24 21:41:04.177462] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.297 [2024-04-24 21:41:04.177633] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.298 [2024-04-24 21:41:04.177648] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.298 [2024-04-24 21:41:04.177657] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.298 [2024-04-24 21:41:04.180341] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.557 [2024-04-24 21:41:04.189363] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.557 [2024-04-24 21:41:04.189973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.557 [2024-04-24 21:41:04.190368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.557 [2024-04-24 21:41:04.190382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.557 [2024-04-24 21:41:04.190392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.557 [2024-04-24 21:41:04.190568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.557 [2024-04-24 21:41:04.190738] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.557 [2024-04-24 21:41:04.190750] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.557 [2024-04-24 21:41:04.190759] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.557 [2024-04-24 21:41:04.193406] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.557 [2024-04-24 21:41:04.202261] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.557 [2024-04-24 21:41:04.202780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.557 [2024-04-24 21:41:04.203187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.557 [2024-04-24 21:41:04.203200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.557 [2024-04-24 21:41:04.203209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.557 [2024-04-24 21:41:04.203375] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.557 [2024-04-24 21:41:04.203546] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.557 [2024-04-24 21:41:04.203558] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.557 [2024-04-24 21:41:04.203567] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.557 [2024-04-24 21:41:04.206143] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.557 [2024-04-24 21:41:04.215160] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.557 [2024-04-24 21:41:04.215846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.557 [2024-04-24 21:41:04.216370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.557 [2024-04-24 21:41:04.216412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.557 [2024-04-24 21:41:04.216445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.557 [2024-04-24 21:41:04.216761] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.557 [2024-04-24 21:41:04.216927] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.557 [2024-04-24 21:41:04.216938] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.557 [2024-04-24 21:41:04.216950] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.557 [2024-04-24 21:41:04.219582] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.557 [2024-04-24 21:41:04.228060] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.557 [2024-04-24 21:41:04.228654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.557 [2024-04-24 21:41:04.229062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.557 [2024-04-24 21:41:04.229102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.557 [2024-04-24 21:41:04.229134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.557 [2024-04-24 21:41:04.229593] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.557 [2024-04-24 21:41:04.229759] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.557 [2024-04-24 21:41:04.229771] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.557 [2024-04-24 21:41:04.229780] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.557 [2024-04-24 21:41:04.232351] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.557 [2024-04-24 21:41:04.240876] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.557 [2024-04-24 21:41:04.241481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.557 [2024-04-24 21:41:04.242014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.557 [2024-04-24 21:41:04.242054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.557 [2024-04-24 21:41:04.242086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.557 [2024-04-24 21:41:04.242510] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.557 [2024-04-24 21:41:04.242677] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.557 [2024-04-24 21:41:04.242688] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.557 [2024-04-24 21:41:04.242698] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.557 [2024-04-24 21:41:04.245274] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.557 [2024-04-24 21:41:04.253791] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.557 [2024-04-24 21:41:04.254391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.557 [2024-04-24 21:41:04.254753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.557 [2024-04-24 21:41:04.254768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.557 [2024-04-24 21:41:04.254778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.558 [2024-04-24 21:41:04.254945] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.558 [2024-04-24 21:41:04.255110] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.558 [2024-04-24 21:41:04.255122] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.558 [2024-04-24 21:41:04.255131] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.558 [2024-04-24 21:41:04.257718] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.558 [2024-04-24 21:41:04.266706] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.558 [2024-04-24 21:41:04.267292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.267691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.267734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.558 [2024-04-24 21:41:04.267768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.558 [2024-04-24 21:41:04.267976] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.558 [2024-04-24 21:41:04.268133] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.558 [2024-04-24 21:41:04.268159] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.558 [2024-04-24 21:41:04.268169] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.558 [2024-04-24 21:41:04.270818] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.558 [2024-04-24 21:41:04.279569] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.558 [2024-04-24 21:41:04.280242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.280677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.280690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.558 [2024-04-24 21:41:04.280700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.558 [2024-04-24 21:41:04.280870] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.558 [2024-04-24 21:41:04.281040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.558 [2024-04-24 21:41:04.281052] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.558 [2024-04-24 21:41:04.281061] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.558 [2024-04-24 21:41:04.283719] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.558 [2024-04-24 21:41:04.292503] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.558 [2024-04-24 21:41:04.293154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.293588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.293602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.558 [2024-04-24 21:41:04.293612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.558 [2024-04-24 21:41:04.293782] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.558 [2024-04-24 21:41:04.293951] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.558 [2024-04-24 21:41:04.293962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.558 [2024-04-24 21:41:04.293972] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.558 [2024-04-24 21:41:04.296630] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.558 [2024-04-24 21:41:04.305393] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.558 [2024-04-24 21:41:04.306002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.306213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.306226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.558 [2024-04-24 21:41:04.306236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.558 [2024-04-24 21:41:04.306405] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.558 [2024-04-24 21:41:04.306580] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.558 [2024-04-24 21:41:04.306592] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.558 [2024-04-24 21:41:04.306603] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.558 [2024-04-24 21:41:04.309246] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.558 [2024-04-24 21:41:04.318311] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.558 [2024-04-24 21:41:04.318925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.319405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.319419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.558 [2024-04-24 21:41:04.319429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.558 [2024-04-24 21:41:04.319604] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.558 [2024-04-24 21:41:04.319775] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.558 [2024-04-24 21:41:04.319787] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.558 [2024-04-24 21:41:04.319796] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.558 [2024-04-24 21:41:04.322441] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.558 [2024-04-24 21:41:04.331222] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.558 [2024-04-24 21:41:04.331559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.331965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.331978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.558 [2024-04-24 21:41:04.331988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.558 [2024-04-24 21:41:04.332158] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.558 [2024-04-24 21:41:04.332328] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.558 [2024-04-24 21:41:04.332340] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.558 [2024-04-24 21:41:04.332349] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.558 [2024-04-24 21:41:04.335001] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.558 [2024-04-24 21:41:04.344077] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.558 [2024-04-24 21:41:04.344588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.345064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.345078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.558 [2024-04-24 21:41:04.345088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.558 [2024-04-24 21:41:04.345257] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.558 [2024-04-24 21:41:04.345426] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.558 [2024-04-24 21:41:04.345437] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.558 [2024-04-24 21:41:04.345446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.558 [2024-04-24 21:41:04.348106] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.558 [2024-04-24 21:41:04.357031] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.558 [2024-04-24 21:41:04.357685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.358161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.358175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.558 [2024-04-24 21:41:04.358185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.558 [2024-04-24 21:41:04.358354] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.558 [2024-04-24 21:41:04.358530] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.558 [2024-04-24 21:41:04.358542] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.558 [2024-04-24 21:41:04.358552] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.558 [2024-04-24 21:41:04.361201] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.558 [2024-04-24 21:41:04.370180] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.558 [2024-04-24 21:41:04.370872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.371225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.371238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.558 [2024-04-24 21:41:04.371249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.558 [2024-04-24 21:41:04.371426] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.558 [2024-04-24 21:41:04.371611] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.558 [2024-04-24 21:41:04.371623] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.558 [2024-04-24 21:41:04.371633] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.558 [2024-04-24 21:41:04.374444] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.558 [2024-04-24 21:41:04.383165] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.558 [2024-04-24 21:41:04.383700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.558 [2024-04-24 21:41:04.384176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.559 [2024-04-24 21:41:04.384189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.559 [2024-04-24 21:41:04.384200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.559 [2024-04-24 21:41:04.384369] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.559 [2024-04-24 21:41:04.384545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.559 [2024-04-24 21:41:04.384557] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.559 [2024-04-24 21:41:04.384566] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.559 [2024-04-24 21:41:04.387217] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.559 [2024-04-24 21:41:04.396147] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.559 [2024-04-24 21:41:04.396810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.559 [2024-04-24 21:41:04.397208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.559 [2024-04-24 21:41:04.397221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.559 [2024-04-24 21:41:04.397232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.559 [2024-04-24 21:41:04.397400] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.559 [2024-04-24 21:41:04.397573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.559 [2024-04-24 21:41:04.397585] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.559 [2024-04-24 21:41:04.397595] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.559 [2024-04-24 21:41:04.400239] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.559 [2024-04-24 21:41:04.409008] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.559 [2024-04-24 21:41:04.409672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.559 [2024-04-24 21:41:04.410026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.559 [2024-04-24 21:41:04.410039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.559 [2024-04-24 21:41:04.410049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.559 [2024-04-24 21:41:04.410218] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.559 [2024-04-24 21:41:04.410388] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.559 [2024-04-24 21:41:04.410400] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.559 [2024-04-24 21:41:04.410409] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.559 [2024-04-24 21:41:04.413058] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.559 [2024-04-24 21:41:04.421988] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.559 [2024-04-24 21:41:04.422588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.559 [2024-04-24 21:41:04.423040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.559 [2024-04-24 21:41:04.423053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.559 [2024-04-24 21:41:04.423067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.559 [2024-04-24 21:41:04.423237] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.559 [2024-04-24 21:41:04.423407] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.559 [2024-04-24 21:41:04.423418] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.559 [2024-04-24 21:41:04.423428] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.559 [2024-04-24 21:41:04.426085] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.559 [2024-04-24 21:41:04.434857] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.559 [2024-04-24 21:41:04.435524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.559 [2024-04-24 21:41:04.435947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.559 [2024-04-24 21:41:04.435960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.559 [2024-04-24 21:41:04.435970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.559 [2024-04-24 21:41:04.436140] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.559 [2024-04-24 21:41:04.436311] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.559 [2024-04-24 21:41:04.436323] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.559 [2024-04-24 21:41:04.436332] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.559 [2024-04-24 21:41:04.438998] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.819 [2024-04-24 21:41:04.447864] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.819 [2024-04-24 21:41:04.448525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.449001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.449016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.819 [2024-04-24 21:41:04.449027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.819 [2024-04-24 21:41:04.449200] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.819 [2024-04-24 21:41:04.449371] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.819 [2024-04-24 21:41:04.449383] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.819 [2024-04-24 21:41:04.449392] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.819 [2024-04-24 21:41:04.452042] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.819 [2024-04-24 21:41:04.460798] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.819 [2024-04-24 21:41:04.461467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.461942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.461955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.819 [2024-04-24 21:41:04.461966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.819 [2024-04-24 21:41:04.462139] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.819 [2024-04-24 21:41:04.462310] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.819 [2024-04-24 21:41:04.462321] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.819 [2024-04-24 21:41:04.462330] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.819 [2024-04-24 21:41:04.465156] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.819 [2024-04-24 21:41:04.473762] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.819 [2024-04-24 21:41:04.474411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.474886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.474899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.819 [2024-04-24 21:41:04.474910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.819 [2024-04-24 21:41:04.475080] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.819 [2024-04-24 21:41:04.475250] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.819 [2024-04-24 21:41:04.475261] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.819 [2024-04-24 21:41:04.475270] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.819 [2024-04-24 21:41:04.477925] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.819 [2024-04-24 21:41:04.486695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.819 [2024-04-24 21:41:04.487365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.487791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.487805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.819 [2024-04-24 21:41:04.487815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.819 [2024-04-24 21:41:04.487984] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.819 [2024-04-24 21:41:04.488153] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.819 [2024-04-24 21:41:04.488165] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.819 [2024-04-24 21:41:04.488174] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.819 [2024-04-24 21:41:04.490819] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.819 [2024-04-24 21:41:04.499572] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.819 [2024-04-24 21:41:04.500247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.500720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.500734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.819 [2024-04-24 21:41:04.500744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.819 [2024-04-24 21:41:04.500914] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.819 [2024-04-24 21:41:04.501087] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.819 [2024-04-24 21:41:04.501099] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.819 [2024-04-24 21:41:04.501108] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.819 [2024-04-24 21:41:04.503757] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.819 [2024-04-24 21:41:04.512427] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.819 [2024-04-24 21:41:04.513031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.513505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.513519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.819 [2024-04-24 21:41:04.513529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.819 [2024-04-24 21:41:04.513699] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.819 [2024-04-24 21:41:04.513869] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.819 [2024-04-24 21:41:04.513881] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.819 [2024-04-24 21:41:04.513890] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.819 [2024-04-24 21:41:04.516536] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.819 [2024-04-24 21:41:04.525372] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.819 [2024-04-24 21:41:04.526079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.526485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.526499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.819 [2024-04-24 21:41:04.526510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.819 [2024-04-24 21:41:04.526692] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.819 [2024-04-24 21:41:04.526862] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.819 [2024-04-24 21:41:04.526873] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.819 [2024-04-24 21:41:04.526882] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.819 [2024-04-24 21:41:04.529537] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.819 [2024-04-24 21:41:04.538310] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.819 [2024-04-24 21:41:04.538907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.819 [2024-04-24 21:41:04.539313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.539326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.820 [2024-04-24 21:41:04.539335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.820 [2024-04-24 21:41:04.539518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.820 [2024-04-24 21:41:04.539689] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.820 [2024-04-24 21:41:04.539703] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.820 [2024-04-24 21:41:04.539713] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.820 [2024-04-24 21:41:04.542366] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.820 [2024-04-24 21:41:04.551294] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.820 [2024-04-24 21:41:04.551965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.552369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.552383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.820 [2024-04-24 21:41:04.552393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.820 [2024-04-24 21:41:04.552568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.820 [2024-04-24 21:41:04.552739] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.820 [2024-04-24 21:41:04.552750] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.820 [2024-04-24 21:41:04.552760] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.820 [2024-04-24 21:41:04.555407] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.820 [2024-04-24 21:41:04.564161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.820 [2024-04-24 21:41:04.564848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.565323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.565336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.820 [2024-04-24 21:41:04.565346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.820 [2024-04-24 21:41:04.565520] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.820 [2024-04-24 21:41:04.565690] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.820 [2024-04-24 21:41:04.565701] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.820 [2024-04-24 21:41:04.565710] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.820 [2024-04-24 21:41:04.568384] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.820 [2024-04-24 21:41:04.577070] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.820 [2024-04-24 21:41:04.577720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.578194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.578207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.820 [2024-04-24 21:41:04.578217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.820 [2024-04-24 21:41:04.578387] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.820 [2024-04-24 21:41:04.578561] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.820 [2024-04-24 21:41:04.578573] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.820 [2024-04-24 21:41:04.578585] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.820 [2024-04-24 21:41:04.581388] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.820 [2024-04-24 21:41:04.590110] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.820 [2024-04-24 21:41:04.590802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.591267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.591307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.820 [2024-04-24 21:41:04.591341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.820 [2024-04-24 21:41:04.591600] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.820 [2024-04-24 21:41:04.591789] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.820 [2024-04-24 21:41:04.591800] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.820 [2024-04-24 21:41:04.591810] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.820 [2024-04-24 21:41:04.594512] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.820 [2024-04-24 21:41:04.602964] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.820 [2024-04-24 21:41:04.603658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.603952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.603992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.820 [2024-04-24 21:41:04.604025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.820 [2024-04-24 21:41:04.604636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.820 [2024-04-24 21:41:04.604806] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.820 [2024-04-24 21:41:04.604817] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.820 [2024-04-24 21:41:04.604827] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.820 [2024-04-24 21:41:04.607474] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.820 [2024-04-24 21:41:04.615796] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.820 [2024-04-24 21:41:04.616478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.616943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.616983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.820 [2024-04-24 21:41:04.617015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.820 [2024-04-24 21:41:04.617458] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.820 [2024-04-24 21:41:04.617630] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.820 [2024-04-24 21:41:04.617641] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.820 [2024-04-24 21:41:04.617649] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.820 [2024-04-24 21:41:04.620089] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.820 [2024-04-24 21:41:04.628429] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.820 [2024-04-24 21:41:04.629094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.629551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.629592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.820 [2024-04-24 21:41:04.629624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.820 [2024-04-24 21:41:04.630212] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.820 [2024-04-24 21:41:04.630471] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.820 [2024-04-24 21:41:04.630482] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.820 [2024-04-24 21:41:04.630490] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.820 [2024-04-24 21:41:04.633900] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.820 [2024-04-24 21:41:04.641992] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.820 [2024-04-24 21:41:04.642657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.643122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.643163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.820 [2024-04-24 21:41:04.643195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.820 [2024-04-24 21:41:04.643593] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.820 [2024-04-24 21:41:04.643751] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.820 [2024-04-24 21:41:04.643762] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.820 [2024-04-24 21:41:04.643770] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.820 [2024-04-24 21:41:04.646208] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.820 [2024-04-24 21:41:04.654717] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.820 [2024-04-24 21:41:04.655288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.655764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.655808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.820 [2024-04-24 21:41:04.655840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.820 [2024-04-24 21:41:04.656427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.820 [2024-04-24 21:41:04.656859] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.820 [2024-04-24 21:41:04.656870] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.820 [2024-04-24 21:41:04.656878] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.820 [2024-04-24 21:41:04.659320] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.820 [2024-04-24 21:41:04.667396] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.820 [2024-04-24 21:41:04.668035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.668518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.668559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.820 [2024-04-24 21:41:04.668591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.820 [2024-04-24 21:41:04.668834] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.820 [2024-04-24 21:41:04.668991] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.820 [2024-04-24 21:41:04.669002] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.820 [2024-04-24 21:41:04.669011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.820 [2024-04-24 21:41:04.671456] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.820 [2024-04-24 21:41:04.680110] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.820 [2024-04-24 21:41:04.680783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.681313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.820 [2024-04-24 21:41:04.681353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.821 [2024-04-24 21:41:04.681386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.821 [2024-04-24 21:41:04.681878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.821 [2024-04-24 21:41:04.682037] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.821 [2024-04-24 21:41:04.682047] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.821 [2024-04-24 21:41:04.682056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.821 [2024-04-24 21:41:04.684502] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:41.821 [2024-04-24 21:41:04.692861] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.821 [2024-04-24 21:41:04.693496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.821 [2024-04-24 21:41:04.693972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.821 [2024-04-24 21:41:04.694012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:41.821 [2024-04-24 21:41:04.694044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:41.821 [2024-04-24 21:41:04.694648] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:41.821 [2024-04-24 21:41:04.694972] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:41.821 [2024-04-24 21:41:04.694982] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:41.821 [2024-04-24 21:41:04.694992] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.821 [2024-04-24 21:41:04.697428] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.081 [2024-04-24 21:41:04.705792] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.081 [2024-04-24 21:41:04.706480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.706997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.707038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.081 [2024-04-24 21:41:04.707071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.081 [2024-04-24 21:41:04.707671] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.081 [2024-04-24 21:41:04.707830] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.081 [2024-04-24 21:41:04.707840] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.081 [2024-04-24 21:41:04.707849] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.081 [2024-04-24 21:41:04.710429] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.081 [2024-04-24 21:41:04.718448] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.081 [2024-04-24 21:41:04.719129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.719597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.719640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.081 [2024-04-24 21:41:04.719672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.081 [2024-04-24 21:41:04.719848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.081 [2024-04-24 21:41:04.720005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.081 [2024-04-24 21:41:04.720015] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.081 [2024-04-24 21:41:04.720024] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.081 [2024-04-24 21:41:04.722461] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.081 [2024-04-24 21:41:04.731112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.081 [2024-04-24 21:41:04.731780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.732244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.732284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.081 [2024-04-24 21:41:04.732316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.081 [2024-04-24 21:41:04.732716] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.081 [2024-04-24 21:41:04.732874] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.081 [2024-04-24 21:41:04.732885] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.081 [2024-04-24 21:41:04.732894] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.081 [2024-04-24 21:41:04.735339] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.081 [2024-04-24 21:41:04.743835] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.081 [2024-04-24 21:41:04.744493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.745038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.745079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.081 [2024-04-24 21:41:04.745112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.081 [2024-04-24 21:41:04.745718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.081 [2024-04-24 21:41:04.746101] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.081 [2024-04-24 21:41:04.746112] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.081 [2024-04-24 21:41:04.746121] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.081 [2024-04-24 21:41:04.748639] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.081 [2024-04-24 21:41:04.756553] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.081 [2024-04-24 21:41:04.757209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.757744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.757786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.081 [2024-04-24 21:41:04.757818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.081 [2024-04-24 21:41:04.758404] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.081 [2024-04-24 21:41:04.758970] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.081 [2024-04-24 21:41:04.758981] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.081 [2024-04-24 21:41:04.758989] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.081 [2024-04-24 21:41:04.761522] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.081 [2024-04-24 21:41:04.769302] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.081 [2024-04-24 21:41:04.769894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.770366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.770405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.081 [2024-04-24 21:41:04.770437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.081 [2024-04-24 21:41:04.770839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.081 [2024-04-24 21:41:04.770996] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.081 [2024-04-24 21:41:04.771008] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.081 [2024-04-24 21:41:04.771016] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.081 [2024-04-24 21:41:04.773457] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.081 [2024-04-24 21:41:04.781942] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.081 [2024-04-24 21:41:04.782588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.783124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.783163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.081 [2024-04-24 21:41:04.783204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.081 [2024-04-24 21:41:04.783759] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.081 [2024-04-24 21:41:04.783943] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.081 [2024-04-24 21:41:04.783954] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.081 [2024-04-24 21:41:04.783963] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.081 [2024-04-24 21:41:04.786643] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.081 [2024-04-24 21:41:04.794746] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.081 [2024-04-24 21:41:04.795408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.795946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.795988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.081 [2024-04-24 21:41:04.796020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.081 [2024-04-24 21:41:04.796283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.081 [2024-04-24 21:41:04.796440] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.081 [2024-04-24 21:41:04.796456] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.081 [2024-04-24 21:41:04.796465] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.081 [2024-04-24 21:41:04.798907] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.081 [2024-04-24 21:41:04.807412] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.081 [2024-04-24 21:41:04.808096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.808566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.808608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.081 [2024-04-24 21:41:04.808641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.081 [2024-04-24 21:41:04.809113] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.081 [2024-04-24 21:41:04.809271] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.081 [2024-04-24 21:41:04.809281] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.081 [2024-04-24 21:41:04.809290] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.081 [2024-04-24 21:41:04.811730] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.081 [2024-04-24 21:41:04.820069] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.081 [2024-04-24 21:41:04.820666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.821130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.081 [2024-04-24 21:41:04.821170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.082 [2024-04-24 21:41:04.821202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.082 [2024-04-24 21:41:04.821811] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.082 [2024-04-24 21:41:04.822365] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.082 [2024-04-24 21:41:04.822376] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.082 [2024-04-24 21:41:04.822384] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.082 [2024-04-24 21:41:04.826073] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.082 [2024-04-24 21:41:04.833647] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.082 [2024-04-24 21:41:04.834311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.082 [2024-04-24 21:41:04.834847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.082 [2024-04-24 21:41:04.834889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.082 [2024-04-24 21:41:04.834922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.082 [2024-04-24 21:41:04.835193] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.082 [2024-04-24 21:41:04.835350] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.082 [2024-04-24 21:41:04.835361] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.082 [2024-04-24 21:41:04.835370] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.082 [2024-04-24 21:41:04.837815] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.082 [2024-04-24 21:41:04.846311] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.082 [2024-04-24 21:41:04.846892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.082 [2024-04-24 21:41:04.847373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.082 [2024-04-24 21:41:04.847412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.082 [2024-04-24 21:41:04.847444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.082 [2024-04-24 21:41:04.847933] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.082 [2024-04-24 21:41:04.848090] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.082 [2024-04-24 21:41:04.848101] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.082 [2024-04-24 21:41:04.848110] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.082 [2024-04-24 21:41:04.850550] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.082 [2024-04-24 21:41:04.859040] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.082 [2024-04-24 21:41:04.859458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.082 [2024-04-24 21:41:04.859869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.082 [2024-04-24 21:41:04.859909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.082 [2024-04-24 21:41:04.859941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.082 [2024-04-24 21:41:04.860300] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.082 [2024-04-24 21:41:04.860466] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.082 [2024-04-24 21:41:04.860493] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.082 [2024-04-24 21:41:04.860503] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.082 [2024-04-24 21:41:04.863006] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.082 [2024-04-24 21:41:04.871777] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.082 [2024-04-24 21:41:04.872439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.082 [2024-04-24 21:41:04.872987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.082 [2024-04-24 21:41:04.873027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.082 [2024-04-24 21:41:04.873060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.082 [2024-04-24 21:41:04.873662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.082 [2024-04-24 21:41:04.874252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.082 [2024-04-24 21:41:04.874285] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.082 [2024-04-24 21:41:04.874318] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.082 [2024-04-24 21:41:04.878030] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.082 [2024-04-24 21:41:04.884886] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.082 [2024-04-24 21:41:04.885552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.082 [2024-04-24 21:41:04.886031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.082 [2024-04-24 21:41:04.886071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.082 [2024-04-24 21:41:04.886103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.082 [2024-04-24 21:41:04.886496] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.082 [2024-04-24 21:41:04.886657] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.082 [2024-04-24 21:41:04.886669] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.082 [2024-04-24 21:41:04.886677] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.082 [2024-04-24 21:41:04.889186] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.082 [2024-04-24 21:41:04.897647] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:42.082 [2024-04-24 21:41:04.898311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.082 [2024-04-24 21:41:04.898773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.082 [2024-04-24 21:41:04.898815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:42.082 [2024-04-24 21:41:04.898848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:42.082 [2024-04-24 21:41:04.899189] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:42.082 [2024-04-24 21:41:04.899347] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:42.082 [2024-04-24 21:41:04.899361] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:42.082 [2024-04-24 21:41:04.899369] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:42.082 [2024-04-24 21:41:04.901811] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:42.082 [2024-04-24 21:41:04.910300] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.082 [2024-04-24 21:41:04.910980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.082 [2024-04-24 21:41:04.911479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.082 [2024-04-24 21:41:04.911521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.082 [2024-04-24 21:41:04.911555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.082 [2024-04-24 21:41:04.912145] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.082 [2024-04-24 21:41:04.912607] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.082 [2024-04-24 21:41:04.912618] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.082 [2024-04-24 21:41:04.912627] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.082 [2024-04-24 21:41:04.915064] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.082 [2024-04-24 21:41:04.922977] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.082 [2024-04-24 21:41:04.923627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.082 [2024-04-24 21:41:04.924161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.082 [2024-04-24 21:41:04.924201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.082 [2024-04-24 21:41:04.924233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.082 [2024-04-24 21:41:04.924563] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.082 [2024-04-24 21:41:04.924720] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.082 [2024-04-24 21:41:04.924732] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.082 [2024-04-24 21:41:04.924741] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.082 [2024-04-24 21:41:04.927188] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.082 [2024-04-24 21:41:04.935699] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.082 [2024-04-24 21:41:04.936366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.082 [2024-04-24 21:41:04.936893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.082 [2024-04-24 21:41:04.936935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.082 [2024-04-24 21:41:04.936968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.082 [2024-04-24 21:41:04.937215] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.082 [2024-04-24 21:41:04.937372] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.082 [2024-04-24 21:41:04.937383] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.082 [2024-04-24 21:41:04.937395] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.082 [2024-04-24 21:41:04.939848] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.082 [2024-04-24 21:41:04.948354] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.082 [2024-04-24 21:41:04.949015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.082 [2024-04-24 21:41:04.949500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.082 [2024-04-24 21:41:04.949513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.082 [2024-04-24 21:41:04.949522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.082 [2024-04-24 21:41:04.949678] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.082 [2024-04-24 21:41:04.949834] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.082 [2024-04-24 21:41:04.949844] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.082 [2024-04-24 21:41:04.949853] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.082 [2024-04-24 21:41:04.952297] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.082 [2024-04-24 21:41:04.961100] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.082 [2024-04-24 21:41:04.961773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.083 [2024-04-24 21:41:04.962351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.083 [2024-04-24 21:41:04.962365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.083 [2024-04-24 21:41:04.962375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.083 [2024-04-24 21:41:04.962550] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.083 [2024-04-24 21:41:04.962718] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.083 [2024-04-24 21:41:04.962734] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.083 [2024-04-24 21:41:04.962748] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.083 [2024-04-24 21:41:04.965355] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.342 [2024-04-24 21:41:04.973885] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.342 [2024-04-24 21:41:04.974567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.342 [2024-04-24 21:41:04.975081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.342 [2024-04-24 21:41:04.975121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.342 [2024-04-24 21:41:04.975153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.342 [2024-04-24 21:41:04.975710] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.342 [2024-04-24 21:41:04.975868] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.342 [2024-04-24 21:41:04.975879] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.342 [2024-04-24 21:41:04.975891] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.342 [2024-04-24 21:41:04.978391] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.342 [2024-04-24 21:41:04.986626] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.342 [2024-04-24 21:41:04.987311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.342 [2024-04-24 21:41:04.987783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.342 [2024-04-24 21:41:04.987825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.342 [2024-04-24 21:41:04.987859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.342 [2024-04-24 21:41:04.988447] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.342 [2024-04-24 21:41:04.988697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.342 [2024-04-24 21:41:04.988709] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.342 [2024-04-24 21:41:04.988719] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.342 [2024-04-24 21:41:04.991203] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.342 [2024-04-24 21:41:04.999284] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.342 [2024-04-24 21:41:04.999950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.342 [2024-04-24 21:41:05.000486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.342 [2024-04-24 21:41:05.000529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.342 [2024-04-24 21:41:05.000561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.342 [2024-04-24 21:41:05.001147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.342 [2024-04-24 21:41:05.001590] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.342 [2024-04-24 21:41:05.001601] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.342 [2024-04-24 21:41:05.001609] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.342 [2024-04-24 21:41:05.004054] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.342 [2024-04-24 21:41:05.011987] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.342 [2024-04-24 21:41:05.012625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.342 [2024-04-24 21:41:05.013171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.342 [2024-04-24 21:41:05.013211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.342 [2024-04-24 21:41:05.013243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.342 [2024-04-24 21:41:05.013559] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.342 [2024-04-24 21:41:05.013717] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.342 [2024-04-24 21:41:05.013728] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.342 [2024-04-24 21:41:05.013736] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.342 [2024-04-24 21:41:05.016183] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.342 [2024-04-24 21:41:05.024683] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.342 [2024-04-24 21:41:05.025329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.342 [2024-04-24 21:41:05.025886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.342 [2024-04-24 21:41:05.025928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.342 [2024-04-24 21:41:05.025961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.343 [2024-04-24 21:41:05.026560] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.343 [2024-04-24 21:41:05.026889] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.343 [2024-04-24 21:41:05.026900] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.343 [2024-04-24 21:41:05.026908] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.343 [2024-04-24 21:41:05.029343] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.343 [2024-04-24 21:41:05.037401] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.343 [2024-04-24 21:41:05.038091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.038652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.038693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.343 [2024-04-24 21:41:05.038727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.343 [2024-04-24 21:41:05.039313] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.343 [2024-04-24 21:41:05.039862] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.343 [2024-04-24 21:41:05.039874] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.343 [2024-04-24 21:41:05.039883] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.343 [2024-04-24 21:41:05.042542] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.343 [2024-04-24 21:41:05.050420] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.343 [2024-04-24 21:41:05.051099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.051617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.051658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.343 [2024-04-24 21:41:05.051691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.343 [2024-04-24 21:41:05.052277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.343 [2024-04-24 21:41:05.052875] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.343 [2024-04-24 21:41:05.052901] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.343 [2024-04-24 21:41:05.052909] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.343 [2024-04-24 21:41:05.055356] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.343 [2024-04-24 21:41:05.063151] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.343 [2024-04-24 21:41:05.063737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.064186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.064225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.343 [2024-04-24 21:41:05.064258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.343 [2024-04-24 21:41:05.064859] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.343 [2024-04-24 21:41:05.065051] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.343 [2024-04-24 21:41:05.065062] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.343 [2024-04-24 21:41:05.065070] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.343 [2024-04-24 21:41:05.067514] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.343 [2024-04-24 21:41:05.075870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.343 [2024-04-24 21:41:05.076543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.077033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.077072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.343 [2024-04-24 21:41:05.077104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.343 [2024-04-24 21:41:05.077706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.343 [2024-04-24 21:41:05.078050] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.343 [2024-04-24 21:41:05.078061] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.343 [2024-04-24 21:41:05.078069] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.343 [2024-04-24 21:41:05.080509] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.343 [2024-04-24 21:41:05.088560] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.343 [2024-04-24 21:41:05.089222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.089775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.089817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.343 [2024-04-24 21:41:05.089849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.343 [2024-04-24 21:41:05.090434] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.343 [2024-04-24 21:41:05.090820] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.343 [2024-04-24 21:41:05.090831] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.343 [2024-04-24 21:41:05.090840] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.343 [2024-04-24 21:41:05.093280] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.343 [2024-04-24 21:41:05.101278] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.343 [2024-04-24 21:41:05.101886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.102373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.102413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.343 [2024-04-24 21:41:05.102445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.343 [2024-04-24 21:41:05.102808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.343 [2024-04-24 21:41:05.102965] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.343 [2024-04-24 21:41:05.102976] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.343 [2024-04-24 21:41:05.102985] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.343 [2024-04-24 21:41:05.105421] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.343 [2024-04-24 21:41:05.113908] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.343 [2024-04-24 21:41:05.114571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.115127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.115166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.343 [2024-04-24 21:41:05.115199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.343 [2024-04-24 21:41:05.115797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.343 [2024-04-24 21:41:05.116182] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.343 [2024-04-24 21:41:05.116197] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.343 [2024-04-24 21:41:05.116210] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.343 [2024-04-24 21:41:05.119927] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.343 [2024-04-24 21:41:05.127304] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.343 [2024-04-24 21:41:05.127977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.128511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.128551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.343 [2024-04-24 21:41:05.128584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.343 [2024-04-24 21:41:05.128985] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.343 [2024-04-24 21:41:05.129144] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.343 [2024-04-24 21:41:05.129154] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.343 [2024-04-24 21:41:05.129163] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.343 [2024-04-24 21:41:05.131604] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.343 [2024-04-24 21:41:05.139959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.343 [2024-04-24 21:41:05.140593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.141160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.343 [2024-04-24 21:41:05.141200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.343 [2024-04-24 21:41:05.141241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.343 [2024-04-24 21:41:05.141847] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.343 [2024-04-24 21:41:05.142241] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.344 [2024-04-24 21:41:05.142252] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.344 [2024-04-24 21:41:05.142260] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.344 [2024-04-24 21:41:05.144697] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.344 [2024-04-24 21:41:05.152608] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.344 [2024-04-24 21:41:05.153275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.344 [2024-04-24 21:41:05.153833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.344 [2024-04-24 21:41:05.153875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.344 [2024-04-24 21:41:05.153906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.344 [2024-04-24 21:41:05.154138] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.344 [2024-04-24 21:41:05.154295] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.344 [2024-04-24 21:41:05.154305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.344 [2024-04-24 21:41:05.154314] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.344 [2024-04-24 21:41:05.156821] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.344 [2024-04-24 21:41:05.165312] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.344 [2024-04-24 21:41:05.165979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.344 [2024-04-24 21:41:05.166494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.344 [2024-04-24 21:41:05.166535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.344 [2024-04-24 21:41:05.166568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.344 [2024-04-24 21:41:05.167104] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.344 [2024-04-24 21:41:05.167261] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.344 [2024-04-24 21:41:05.167273] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.344 [2024-04-24 21:41:05.167281] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.344 [2024-04-24 21:41:05.169722] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.344 [2024-04-24 21:41:05.178067] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.344 [2024-04-24 21:41:05.178715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.344 [2024-04-24 21:41:05.179248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.344 [2024-04-24 21:41:05.179287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.344 [2024-04-24 21:41:05.179319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.344 [2024-04-24 21:41:05.179800] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.344 [2024-04-24 21:41:05.179958] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.344 [2024-04-24 21:41:05.179969] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.344 [2024-04-24 21:41:05.179977] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.344 [2024-04-24 21:41:05.182420] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.344 [2024-04-24 21:41:05.190770] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.344 [2024-04-24 21:41:05.191434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.344 [2024-04-24 21:41:05.191982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.344 [2024-04-24 21:41:05.192022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.344 [2024-04-24 21:41:05.192053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.344 [2024-04-24 21:41:05.192655] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.344 [2024-04-24 21:41:05.193108] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.344 [2024-04-24 21:41:05.193119] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.344 [2024-04-24 21:41:05.193128] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.344 [2024-04-24 21:41:05.195573] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.344 [2024-04-24 21:41:05.203503] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.344 [2024-04-24 21:41:05.204171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.344 [2024-04-24 21:41:05.204627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.344 [2024-04-24 21:41:05.204669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.344 [2024-04-24 21:41:05.204702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.344 [2024-04-24 21:41:05.205155] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.344 [2024-04-24 21:41:05.205311] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.344 [2024-04-24 21:41:05.205322] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.344 [2024-04-24 21:41:05.205330] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.344 [2024-04-24 21:41:05.207776] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.344 [2024-04-24 21:41:05.216141] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.344 [2024-04-24 21:41:05.216785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.344 [2024-04-24 21:41:05.217325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.344 [2024-04-24 21:41:05.217364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.344 [2024-04-24 21:41:05.217397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.344 [2024-04-24 21:41:05.217861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.344 [2024-04-24 21:41:05.218019] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.344 [2024-04-24 21:41:05.218030] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.344 [2024-04-24 21:41:05.218038] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.344 [2024-04-24 21:41:05.220477] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.604 [2024-04-24 21:41:05.229089] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.604 [2024-04-24 21:41:05.229772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.604 [2024-04-24 21:41:05.230274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.604 [2024-04-24 21:41:05.230314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.604 [2024-04-24 21:41:05.230347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.604 [2024-04-24 21:41:05.230951] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.604 [2024-04-24 21:41:05.231248] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.604 [2024-04-24 21:41:05.231260] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.604 [2024-04-24 21:41:05.231268] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.604 [2024-04-24 21:41:05.233855] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.604 [2024-04-24 21:41:05.241965] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.604 [2024-04-24 21:41:05.242636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.604 [2024-04-24 21:41:05.243041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.604 [2024-04-24 21:41:05.243055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.604 [2024-04-24 21:41:05.243064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.604 [2024-04-24 21:41:05.243231] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.604 [2024-04-24 21:41:05.243397] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.604 [2024-04-24 21:41:05.243408] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.604 [2024-04-24 21:41:05.243417] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.604 [2024-04-24 21:41:05.246076] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.604 [2024-04-24 21:41:05.254843] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.604 [2024-04-24 21:41:05.255488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.604 [2024-04-24 21:41:05.255945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.604 [2024-04-24 21:41:05.255958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.604 [2024-04-24 21:41:05.255968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.604 [2024-04-24 21:41:05.256133] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.604 [2024-04-24 21:41:05.256303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.604 [2024-04-24 21:41:05.256315] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.604 [2024-04-24 21:41:05.256323] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.604 [2024-04-24 21:41:05.258995] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.604 [2024-04-24 21:41:05.267769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.604 [2024-04-24 21:41:05.268433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.604 [2024-04-24 21:41:05.268937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.604 [2024-04-24 21:41:05.268951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.604 [2024-04-24 21:41:05.268961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.604 [2024-04-24 21:41:05.269125] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.604 [2024-04-24 21:41:05.269289] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.604 [2024-04-24 21:41:05.269301] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.604 [2024-04-24 21:41:05.269309] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.604 [2024-04-24 21:41:05.271893] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.604 [2024-04-24 21:41:05.280561] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.604 [2024-04-24 21:41:05.281229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.604 [2024-04-24 21:41:05.281700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.604 [2024-04-24 21:41:05.281713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.605 [2024-04-24 21:41:05.281723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.605 [2024-04-24 21:41:05.281888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.605 [2024-04-24 21:41:05.282053] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.605 [2024-04-24 21:41:05.282064] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.605 [2024-04-24 21:41:05.282073] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.605 [2024-04-24 21:41:05.284652] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.605 [2024-04-24 21:41:05.293467] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.605 [2024-04-24 21:41:05.294081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.294585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.294629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.605 [2024-04-24 21:41:05.294662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.605 [2024-04-24 21:41:05.295249] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.605 [2024-04-24 21:41:05.295471] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.605 [2024-04-24 21:41:05.295483] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.605 [2024-04-24 21:41:05.295497] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.605 [2024-04-24 21:41:05.298177] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.605 [2024-04-24 21:41:05.306377] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.605 [2024-04-24 21:41:05.306977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.307448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.307467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.605 [2024-04-24 21:41:05.307477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.605 [2024-04-24 21:41:05.307647] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.605 [2024-04-24 21:41:05.307816] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.605 [2024-04-24 21:41:05.307828] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.605 [2024-04-24 21:41:05.307837] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.605 [2024-04-24 21:41:05.310494] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.605 [2024-04-24 21:41:05.319245] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.605 [2024-04-24 21:41:05.319935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.320425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.320438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.605 [2024-04-24 21:41:05.320448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.605 [2024-04-24 21:41:05.320620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.605 [2024-04-24 21:41:05.320789] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.605 [2024-04-24 21:41:05.320800] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.605 [2024-04-24 21:41:05.320809] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.605 [2024-04-24 21:41:05.323462] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.605 [2024-04-24 21:41:05.332223] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.605 [2024-04-24 21:41:05.332811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.333218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.333231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.605 [2024-04-24 21:41:05.333241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.605 [2024-04-24 21:41:05.333405] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.605 [2024-04-24 21:41:05.333594] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.605 [2024-04-24 21:41:05.333606] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.605 [2024-04-24 21:41:05.333619] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.605 [2024-04-24 21:41:05.336259] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.605 [2024-04-24 21:41:05.345105] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.605 [2024-04-24 21:41:05.345720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.346196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.346209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.605 [2024-04-24 21:41:05.346219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.605 [2024-04-24 21:41:05.346383] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.605 [2024-04-24 21:41:05.346552] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.605 [2024-04-24 21:41:05.346564] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.605 [2024-04-24 21:41:05.346573] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.605 [2024-04-24 21:41:05.349151] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.605 [2024-04-24 21:41:05.358044] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.605 [2024-04-24 21:41:05.358652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.359127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.359141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.605 [2024-04-24 21:41:05.359150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.605 [2024-04-24 21:41:05.359316] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.605 [2024-04-24 21:41:05.359488] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.605 [2024-04-24 21:41:05.359500] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.605 [2024-04-24 21:41:05.359509] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.605 [2024-04-24 21:41:05.362083] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.605 [2024-04-24 21:41:05.370938] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.605 [2024-04-24 21:41:05.371600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.372075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.372088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.605 [2024-04-24 21:41:05.372098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.605 [2024-04-24 21:41:05.372262] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.605 [2024-04-24 21:41:05.372426] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.605 [2024-04-24 21:41:05.372438] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.605 [2024-04-24 21:41:05.372447] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.605 [2024-04-24 21:41:05.375028] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:42.605 [2024-04-24 21:41:05.383714] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.605 [2024-04-24 21:41:05.384289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.384814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.605 [2024-04-24 21:41:05.384855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:42.605 [2024-04-24 21:41:05.384888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:42.605 [2024-04-24 21:41:05.385345] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:42.605 [2024-04-24 21:41:05.385517] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:42.605 [2024-04-24 21:41:05.385528] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:42.605 [2024-04-24 21:41:05.385537] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.605 [2024-04-24 21:41:05.388110] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:42.605 - 00:25:43.130 [2024-04-24 21:41:05.396627 - 21:41:05.980753] [... the identical reset cycle for nqn.2016-06.io.spdk:cnode1 (resetting controller -> connect() failed, errno = 111 -> sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 -> Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor -> Ctrlr is in error state -> controller reinitialization failed -> in failed state. -> Resetting controller failed.) repeats 46 more times; only the timestamps vary ...]
00:25:43.130 [2024-04-24 21:41:05.989008] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.130 [2024-04-24 21:41:05.989690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.130 [2024-04-24 21:41:05.990086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.130 [2024-04-24 21:41:05.990099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.130 [2024-04-24 21:41:05.990108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.130 [2024-04-24 21:41:05.990264] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.130 [2024-04-24 21:41:05.990420] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.130 [2024-04-24 21:41:05.990431] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.130 [2024-04-24 21:41:05.990440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.130 [2024-04-24 21:41:05.992923] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.130 [2024-04-24 21:41:06.001854] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.130 [2024-04-24 21:41:06.002513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.130 [2024-04-24 21:41:06.002944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.130 [2024-04-24 21:41:06.002958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.130 [2024-04-24 21:41:06.002967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.130 [2024-04-24 21:41:06.003132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.130 [2024-04-24 21:41:06.003297] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.130 [2024-04-24 21:41:06.003308] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.130 [2024-04-24 21:41:06.003318] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.130 [2024-04-24 21:41:06.005987] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.390 [2024-04-24 21:41:06.014849] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.015510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.015985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.015999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.016008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.016178] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.016347] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.016362] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.016372] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.019037] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.390 [2024-04-24 21:41:06.027838] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.028511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.028939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.028952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.028962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.029133] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.029304] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.029316] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.029325] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.031986] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.390 [2024-04-24 21:41:06.040775] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.041447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.041947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.041960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.041971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.042140] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.042311] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.042323] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.042332] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.044984] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.390 [2024-04-24 21:41:06.053747] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.054414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.054871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.054885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.054895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.055065] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.055235] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.055247] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.055260] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.057917] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.390 [2024-04-24 21:41:06.066691] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.067626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.068149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.068194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.068229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.068848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.069273] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.069285] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.069294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.071960] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.390 [2024-04-24 21:41:06.079687] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.080345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.080764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.080807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.080839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.081244] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.081402] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.081413] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.081421] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.084016] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.390 [2024-04-24 21:41:06.092406] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.093018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.093379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.093419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.093465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.093977] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.094134] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.094145] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.094154] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.096626] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.390 [2024-04-24 21:41:06.105247] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.105824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.106281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.106321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.106355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.106955] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.107245] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.107261] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.107273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.110990] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.390 [2024-04-24 21:41:06.118496] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.119072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.119610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.119652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.119684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.120117] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.120283] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.120294] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.120303] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.122774] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.390 [2024-04-24 21:41:06.131213] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.131782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.132190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.132230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.132262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.132861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.133362] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.133374] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.133382] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.135855] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.390 [2024-04-24 21:41:06.143973] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.144613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.145013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.145053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.145086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.145532] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.145703] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.145714] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.145722] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.148163] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.390 [2024-04-24 21:41:06.156776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.157352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.157889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.157931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.157964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.158359] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.158539] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.158557] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.158567] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.161073] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.390 [2024-04-24 21:41:06.169675] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.170271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.170674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.170687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.170696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.170851] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.171008] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.171019] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.171027] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.173543] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.390 [2024-04-24 21:41:06.182378] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.182987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.183447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.183502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.183536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.184121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.184297] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.184308] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.184316] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.186839] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.390 [2024-04-24 21:41:06.195104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.195777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.196198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.196238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.196271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.196871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.197471] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.197506] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.197537] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.200983] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.390 [2024-04-24 21:41:06.208690] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.209336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.209852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.209895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.209927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.210326] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.210497] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.210508] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.210518] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.212975] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.390 [2024-04-24 21:41:06.221339] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.390 [2024-04-24 21:41:06.221934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.222477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.390 [2024-04-24 21:41:06.222526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.390 [2024-04-24 21:41:06.222559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.390 [2024-04-24 21:41:06.223145] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.390 [2024-04-24 21:41:06.223555] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.390 [2024-04-24 21:41:06.223567] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.390 [2024-04-24 21:41:06.223576] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.390 [2024-04-24 21:41:06.226030] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.391 [2024-04-24 21:41:06.233987] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.391 [2024-04-24 21:41:06.234393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.391 [2024-04-24 21:41:06.234816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.391 [2024-04-24 21:41:06.234871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.391 [2024-04-24 21:41:06.234905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.391 [2024-04-24 21:41:06.235433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.391 [2024-04-24 21:41:06.235626] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.391 [2024-04-24 21:41:06.235637] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.391 [2024-04-24 21:41:06.235646] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.391 [2024-04-24 21:41:06.238084] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.391 [2024-04-24 21:41:06.246785] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.391 [2024-04-24 21:41:06.247477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.391 [2024-04-24 21:41:06.248015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.391 [2024-04-24 21:41:06.248056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.391 [2024-04-24 21:41:06.248089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.391 [2024-04-24 21:41:06.248489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.391 [2024-04-24 21:41:06.248726] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.391 [2024-04-24 21:41:06.248741] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.391 [2024-04-24 21:41:06.248753] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.391 [2024-04-24 21:41:06.252469] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.391 [2024-04-24 21:41:06.259928] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.391 [2024-04-24 21:41:06.260596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.391 [2024-04-24 21:41:06.261113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.391 [2024-04-24 21:41:06.261152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.391 [2024-04-24 21:41:06.261192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.391 [2024-04-24 21:41:06.261604] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.391 [2024-04-24 21:41:06.261770] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.391 [2024-04-24 21:41:06.261781] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.391 [2024-04-24 21:41:06.261790] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.391 [2024-04-24 21:41:06.264332] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.391 [2024-04-24 21:41:06.272697] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.391 [2024-04-24 21:41:06.273368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.391 [2024-04-24 21:41:06.273827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.391 [2024-04-24 21:41:06.273845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.391 [2024-04-24 21:41:06.273855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.391 [2024-04-24 21:41:06.274035] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.391 [2024-04-24 21:41:06.274207] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.391 [2024-04-24 21:41:06.274219] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.391 [2024-04-24 21:41:06.274228] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.650 [2024-04-24 21:41:06.276903] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.650 [2024-04-24 21:41:06.285414] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.650 [2024-04-24 21:41:06.286009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.650 [2024-04-24 21:41:06.286525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.650 [2024-04-24 21:41:06.286567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.650 [2024-04-24 21:41:06.286600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.650 [2024-04-24 21:41:06.287188] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.650 [2024-04-24 21:41:06.287515] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.650 [2024-04-24 21:41:06.287526] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.650 [2024-04-24 21:41:06.287535] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.651 [2024-04-24 21:41:06.289988] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.651 [2024-04-24 21:41:06.298048] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.651 [2024-04-24 21:41:06.298634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.299124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.299164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.651 [2024-04-24 21:41:06.299196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.651 [2024-04-24 21:41:06.299807] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.651 [2024-04-24 21:41:06.300026] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.651 [2024-04-24 21:41:06.300037] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.651 [2024-04-24 21:41:06.300046] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.651 [2024-04-24 21:41:06.302527] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.651 [2024-04-24 21:41:06.310689] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.651 [2024-04-24 21:41:06.311331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.311743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.311781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.651 [2024-04-24 21:41:06.311790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.651 [2024-04-24 21:41:06.311948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.651 [2024-04-24 21:41:06.312104] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.651 [2024-04-24 21:41:06.312115] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.651 [2024-04-24 21:41:06.312124] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.651 [2024-04-24 21:41:06.314597] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.651 [2024-04-24 21:41:06.323449] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.651 [2024-04-24 21:41:06.324127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.324334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.324346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.651 [2024-04-24 21:41:06.324355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.651 [2024-04-24 21:41:06.324534] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.651 [2024-04-24 21:41:06.324699] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.651 [2024-04-24 21:41:06.324710] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.651 [2024-04-24 21:41:06.324719] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.651 [2024-04-24 21:41:06.327411] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.651 [2024-04-24 21:41:06.336340] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.651 [2024-04-24 21:41:06.336965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.337488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.337530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.651 [2024-04-24 21:41:06.337562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.651 [2024-04-24 21:41:06.337718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.651 [2024-04-24 21:41:06.337878] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.651 [2024-04-24 21:41:06.337889] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.651 [2024-04-24 21:41:06.337898] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.651 [2024-04-24 21:41:06.340344] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.651 [2024-04-24 21:41:06.349061] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.651 [2024-04-24 21:41:06.349738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.350222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.350261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.651 [2024-04-24 21:41:06.350294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.651 [2024-04-24 21:41:06.350728] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.651 [2024-04-24 21:41:06.350894] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.651 [2024-04-24 21:41:06.350905] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.651 [2024-04-24 21:41:06.350914] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.651 [2024-04-24 21:41:06.353395] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.651 [2024-04-24 21:41:06.361769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.651 [2024-04-24 21:41:06.362429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.362926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.362966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.651 [2024-04-24 21:41:06.363000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.651 [2024-04-24 21:41:06.363600] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.651 [2024-04-24 21:41:06.364125] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.651 [2024-04-24 21:41:06.364136] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.651 [2024-04-24 21:41:06.364144] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.651 [2024-04-24 21:41:06.366677] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.651 [2024-04-24 21:41:06.374498] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.651 [2024-04-24 21:41:06.375080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.375554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.375568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.651 [2024-04-24 21:41:06.375578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.651 [2024-04-24 21:41:06.375743] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.651 [2024-04-24 21:41:06.375909] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.651 [2024-04-24 21:41:06.375923] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.651 [2024-04-24 21:41:06.375932] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.651 [2024-04-24 21:41:06.378513] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.651 [2024-04-24 21:41:06.387443] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.651 [2024-04-24 21:41:06.388076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.388427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.651 [2024-04-24 21:41:06.388441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.651 [2024-04-24 21:41:06.388455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.651 [2024-04-24 21:41:06.388625] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.651 [2024-04-24 21:41:06.388795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.651 [2024-04-24 21:41:06.388806] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.651 [2024-04-24 21:41:06.388815] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.651 [2024-04-24 21:41:06.391471] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.651 [2024-04-24 21:41:06.400396] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.651 [2024-04-24 21:41:06.401070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.651 [2024-04-24 21:41:06.401523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.651 [2024-04-24 21:41:06.401536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.651 [2024-04-24 21:41:06.401547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.651 [2024-04-24 21:41:06.401717] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.651 [2024-04-24 21:41:06.401886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.651 [2024-04-24 21:41:06.401898] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.651 [2024-04-24 21:41:06.401907] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.651 [2024-04-24 21:41:06.404565] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.651 [2024-04-24 21:41:06.413321] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.651 [2024-04-24 21:41:06.413970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.651 [2024-04-24 21:41:06.414424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.651 [2024-04-24 21:41:06.414437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.651 [2024-04-24 21:41:06.414446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.651 [2024-04-24 21:41:06.414615] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.651 [2024-04-24 21:41:06.414780] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.651 [2024-04-24 21:41:06.414791] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.651 [2024-04-24 21:41:06.414803] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.651 [2024-04-24 21:41:06.417377] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.651 [2024-04-24 21:41:06.426222] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.651 [2024-04-24 21:41:06.426898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.651 [2024-04-24 21:41:06.427362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.651 [2024-04-24 21:41:06.427401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.651 [2024-04-24 21:41:06.427434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.651 [2024-04-24 21:41:06.428032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.651 [2024-04-24 21:41:06.428280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.651 [2024-04-24 21:41:06.428291] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.651 [2024-04-24 21:41:06.428300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.651 [2024-04-24 21:41:06.430967] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.652 [2024-04-24 21:41:06.438899] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.652 [2024-04-24 21:41:06.439531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.440005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.440017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.652 [2024-04-24 21:41:06.440027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.652 [2024-04-24 21:41:06.440199] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.652 [2024-04-24 21:41:06.440365] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.652 [2024-04-24 21:41:06.440376] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.652 [2024-04-24 21:41:06.440386] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.652 [2024-04-24 21:41:06.443033] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.652 [2024-04-24 21:41:06.451864] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.652 [2024-04-24 21:41:06.452520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.452924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.452937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.652 [2024-04-24 21:41:06.452947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.652 [2024-04-24 21:41:06.453114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.652 [2024-04-24 21:41:06.453280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.652 [2024-04-24 21:41:06.453291] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.652 [2024-04-24 21:41:06.453300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.652 [2024-04-24 21:41:06.455974] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.652 [2024-04-24 21:41:06.464746] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.652 [2024-04-24 21:41:06.465414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.465826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.465839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.652 [2024-04-24 21:41:06.465849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.652 [2024-04-24 21:41:06.466015] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.652 [2024-04-24 21:41:06.466181] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.652 [2024-04-24 21:41:06.466192] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.652 [2024-04-24 21:41:06.466201] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.652 [2024-04-24 21:41:06.468860] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.652 [2024-04-24 21:41:06.477698] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.652 [2024-04-24 21:41:06.478276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.478701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.478714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.652 [2024-04-24 21:41:06.478724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.652 [2024-04-24 21:41:06.478888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.652 [2024-04-24 21:41:06.479053] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.652 [2024-04-24 21:41:06.479064] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.652 [2024-04-24 21:41:06.479073] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.652 [2024-04-24 21:41:06.481708] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.652 [2024-04-24 21:41:06.490526] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.652 [2024-04-24 21:41:06.491188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.491588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.491601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.652 [2024-04-24 21:41:06.491612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.652 [2024-04-24 21:41:06.491776] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.652 [2024-04-24 21:41:06.491941] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.652 [2024-04-24 21:41:06.491952] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.652 [2024-04-24 21:41:06.491963] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.652 [2024-04-24 21:41:06.494537] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2996341 Killed "${NVMF_APP[@]}" "$@"
00:25:43.652 21:41:06 -- host/bdevperf.sh@36 -- # tgt_init
00:25:43.652 21:41:06 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:25:43.652 21:41:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:25:43.652 21:41:06 -- common/autotest_common.sh@710 -- # xtrace_disable
00:25:43.652 21:41:06 -- common/autotest_common.sh@10 -- # set +x
00:25:43.652 [2024-04-24 21:41:06.503505] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.652 [2024-04-24 21:41:06.504147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.504621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.504635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.652 [2024-04-24 21:41:06.504645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.652 [2024-04-24 21:41:06.504815] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.652 [2024-04-24 21:41:06.504985] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.652 [2024-04-24 21:41:06.504995] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.652 [2024-04-24 21:41:06.505004] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.652 [2024-04-24 21:41:06.507660] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.652 21:41:06 -- nvmf/common.sh@470 -- # nvmfpid=2997844
00:25:43.652 21:41:06 -- nvmf/common.sh@471 -- # waitforlisten 2997844
00:25:43.652 21:41:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:43.652 21:41:06 -- common/autotest_common.sh@817 -- # '[' -z 2997844 ']'
00:25:43.652 21:41:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:43.652 21:41:06 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:43.652 21:41:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
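The errno 111 (ECONNREFUSED) storm above is expected at this point in the test: line 35 of bdevperf.sh has just killed the previous nvmf_tgt process, so every reconnect attempt from the host side is refused until the freshly started target (nvmfpid 2997844) is listening again on 10.0.0.2:4420. A minimal shell sketch of the same probe-and-retry idea follows; it is illustrative only and not part of the SPDK test suite, with the host/port taken from the log and the retry count and delay chosen arbitrarily:

    # wait_for_listener.sh -- illustrative sketch, not an SPDK script.
    TARGET_IP=10.0.0.2    # target address seen in the nvme_tcp errors above
    TARGET_PORT=4420      # standard NVMe-oF TCP port, also from the log
    for attempt in $(seq 1 50); do
        # /dev/tcp is a bash pseudo-device; a connect() refusal
        # (errno 111, ECONNREFUSED) makes the child shell exit non-zero.
        if bash -c "exec 3<>/dev/tcp/${TARGET_IP}/${TARGET_PORT}" 2>/dev/null; then
            echo "listener accepted after ${attempt} attempt(s)"
            exit 0
        fi
        sleep 0.2         # retry cadence is illustrative
    done
    echo "listener never came back" >&2
    exit 1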
00:25:43.652 21:41:06 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:43.652 21:41:06 -- common/autotest_common.sh@10 -- # set +x
00:25:43.652 [2024-04-24 21:41:06.516441] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.652 [2024-04-24 21:41:06.517115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.517509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.517523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.652 [2024-04-24 21:41:06.517533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.652 [2024-04-24 21:41:06.517703] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.652 [2024-04-24 21:41:06.517873] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.652 [2024-04-24 21:41:06.517885] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.652 [2024-04-24 21:41:06.517894] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.652 [2024-04-24 21:41:06.520551] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.652 [2024-04-24 21:41:06.529324] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.652 [2024-04-24 21:41:06.529996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.530473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.652 [2024-04-24 21:41:06.530486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.652 [2024-04-24 21:41:06.530496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.652 [2024-04-24 21:41:06.530666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.652 [2024-04-24 21:41:06.530836] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.652 [2024-04-24 21:41:06.530848] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.652 [2024-04-24 21:41:06.530856] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.652 [2024-04-24 21:41:06.533537] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.912 [2024-04-24 21:41:06.542267] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.912 [2024-04-24 21:41:06.542952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.912 [2024-04-24 21:41:06.543427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.912 [2024-04-24 21:41:06.543440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.912 [2024-04-24 21:41:06.543456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.912 [2024-04-24 21:41:06.543628] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.912 [2024-04-24 21:41:06.543799] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.912 [2024-04-24 21:41:06.543810] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.912 [2024-04-24 21:41:06.543819] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.912 [2024-04-24 21:41:06.546476] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.912 [2024-04-24 21:41:06.555240] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.912 [2024-04-24 21:41:06.555480] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:25:43.912 [2024-04-24 21:41:06.555526] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:43.912 [2024-04-24 21:41:06.555911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.912 [2024-04-24 21:41:06.556389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.912 [2024-04-24 21:41:06.556401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.912 [2024-04-24 21:41:06.556411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.913 [2024-04-24 21:41:06.556585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.913 [2024-04-24 21:41:06.556755] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.913 [2024-04-24 21:41:06.556766] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.913 [2024-04-24 21:41:06.556775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.913 [2024-04-24 21:41:06.559425] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.913 [2024-04-24 21:41:06.568150] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.913 [2024-04-24 21:41:06.568820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.569268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.569281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.913 [2024-04-24 21:41:06.569290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.913 [2024-04-24 21:41:06.569460] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.913 [2024-04-24 21:41:06.569626] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.913 [2024-04-24 21:41:06.569637] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.913 [2024-04-24 21:41:06.569646] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.913 [2024-04-24 21:41:06.572267] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.913 [2024-04-24 21:41:06.580996] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.913 [2024-04-24 21:41:06.581683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.582035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.582048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.913 [2024-04-24 21:41:06.582058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.913 [2024-04-24 21:41:06.582228] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.913 [2024-04-24 21:41:06.582398] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.913 [2024-04-24 21:41:06.582410] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.913 [2024-04-24 21:41:06.582419] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.913 [2024-04-24 21:41:06.585078] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.913 EAL: No free 2048 kB hugepages reported on node 1
00:25:43.913 [2024-04-24 21:41:06.593992] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.913 [2024-04-24 21:41:06.594668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.595121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.595135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.913 [2024-04-24 21:41:06.595144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.913 [2024-04-24 21:41:06.595315] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.913 [2024-04-24 21:41:06.595503] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.913 [2024-04-24 21:41:06.595515] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.913 [2024-04-24 21:41:06.595528] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.913 [2024-04-24 21:41:06.598180] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.913 [2024-04-24 21:41:06.606931] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.913 [2024-04-24 21:41:06.607533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.607959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.607972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.913 [2024-04-24 21:41:06.607982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.913 [2024-04-24 21:41:06.608147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.913 [2024-04-24 21:41:06.608313] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.913 [2024-04-24 21:41:06.608324] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.913 [2024-04-24 21:41:06.608333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.913 [2024-04-24 21:41:06.610969] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
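The single EAL notice above ("No free 2048 kB hugepages reported on node 1") is informational rather than fatal: it only says that NUMA node 1 has no 2 MB pages reserved, which is harmless as long as the pages the target needs exist on node 0. A quick way to confirm the per-node reservation, as a hedged sketch (the sysfs and procfs paths are standard Linux; the reservation value is illustrative):

    # Show free 2 MB hugepages per NUMA node (standard sysfs layout).
    grep -H '' /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
    # Overall hugepage accounting.
    grep -i '^Huge' /proc/meminfo
    # Reserving pages system-wide if needed (1024 pages = 2 GiB, illustrative):
    #   echo 1024 | sudo tee /proc/sys/vm/nr_hugepages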
00:25:43.913 [2024-04-24 21:41:06.619762] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.913 [2024-04-24 21:41:06.620386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.620861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.620877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.913 [2024-04-24 21:41:06.620887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.913 [2024-04-24 21:41:06.621054] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.913 [2024-04-24 21:41:06.621219] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.913 [2024-04-24 21:41:06.621230] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.913 [2024-04-24 21:41:06.621239] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.913 [2024-04-24 21:41:06.623853] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.913 [2024-04-24 21:41:06.631409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:25:43.913 [2024-04-24 21:41:06.632668] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.913 [2024-04-24 21:41:06.633317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.633770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.633786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.913 [2024-04-24 21:41:06.633796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.913 [2024-04-24 21:41:06.633964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.913 [2024-04-24 21:41:06.634130] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.913 [2024-04-24 21:41:06.634141] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.913 [2024-04-24 21:41:06.634150] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.913 [2024-04-24 21:41:06.636735] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.913 [2024-04-24 21:41:06.645606] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.913 [2024-04-24 21:41:06.646282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.646760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.646773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.913 [2024-04-24 21:41:06.646783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.913 [2024-04-24 21:41:06.646948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.913 [2024-04-24 21:41:06.647113] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.913 [2024-04-24 21:41:06.647125] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.913 [2024-04-24 21:41:06.647134] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.913 [2024-04-24 21:41:06.649738] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.913 [2024-04-24 21:41:06.658499] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.913 [2024-04-24 21:41:06.659137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.659587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.913 [2024-04-24 21:41:06.659602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.913 [2024-04-24 21:41:06.659612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.913 [2024-04-24 21:41:06.659779] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.913 [2024-04-24 21:41:06.659944] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.913 [2024-04-24 21:41:06.659956] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.913 [2024-04-24 21:41:06.659965] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.913 [2024-04-24 21:41:06.662543] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.914 [2024-04-24 21:41:06.671356] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.914 [2024-04-24 21:41:06.671905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.914 [2024-04-24 21:41:06.672309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.914 [2024-04-24 21:41:06.672322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.914 [2024-04-24 21:41:06.672333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.914 [2024-04-24 21:41:06.672521] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.914 [2024-04-24 21:41:06.672694] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.914 [2024-04-24 21:41:06.672706] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.914 [2024-04-24 21:41:06.672716] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.914 [2024-04-24 21:41:06.675367] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.914 [2024-04-24 21:41:06.684303] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.914 [2024-04-24 21:41:06.684994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.914 [2024-04-24 21:41:06.685468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.914 [2024-04-24 21:41:06.685489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.914 [2024-04-24 21:41:06.685500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.914 [2024-04-24 21:41:06.685673] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.914 [2024-04-24 21:41:06.685843] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.914 [2024-04-24 21:41:06.685855] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.914 [2024-04-24 21:41:06.685864] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.914 [2024-04-24 21:41:06.688516] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.914 [2024-04-24 21:41:06.697270] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.914 [2024-04-24 21:41:06.697855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.914 [2024-04-24 21:41:06.698258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.914 [2024-04-24 21:41:06.698272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.914 [2024-04-24 21:41:06.698283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.914 [2024-04-24 21:41:06.698459] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.914 [2024-04-24 21:41:06.698631] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.914 [2024-04-24 21:41:06.698643] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.914 [2024-04-24 21:41:06.698653] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.914 [2024-04-24 21:41:06.701267] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.914 [2024-04-24 21:41:06.704470] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:43.914 [2024-04-24 21:41:06.704499] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:43.914 [2024-04-24 21:41:06.704509] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:43.914 [2024-04-24 21:41:06.704528] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:43.914 [2024-04-24 21:41:06.704535] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
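The app_setup_trace notices above spell out how to inspect the tracepoints enabled by '-e 0xFFFF' at target startup: either snapshot the running process or copy the shared-memory tracefile for offline decoding. A short usage sketch following those notices (output filenames are illustrative; the '-f' option for reading a copied tracefile is assumed from common spdk_trace usage):

    # Live snapshot of the running nvmf target, exactly as the notice suggests:
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # Or keep the raw shared-memory tracefile for offline analysis
    # (path printed by the app above; -f assumed for offline decode):
    cp /dev/shm/nvmf_trace.0 .
    spdk_trace -f ./nvmf_trace.0 > nvmf_trace_offline.txt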
00:25:43.914 [2024-04-24 21:41:06.704581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:43.914 [2024-04-24 21:41:06.704669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:25:43.914 [2024-04-24 21:41:06.704671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:43.914 [2024-04-24 21:41:06.710209] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.914 [2024-04-24 21:41:06.710891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.914 [2024-04-24 21:41:06.711366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.914 [2024-04-24 21:41:06.711380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.914 [2024-04-24 21:41:06.711391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.914 [2024-04-24 21:41:06.711568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.914 [2024-04-24 21:41:06.711740] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.914 [2024-04-24 21:41:06.711756] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.914 [2024-04-24 21:41:06.711766] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.914 [2024-04-24 21:41:06.714418] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:43.914 [2024-04-24 21:41:06.723192] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:43.914 [2024-04-24 21:41:06.723876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.914 [2024-04-24 21:41:06.724355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:43.914 [2024-04-24 21:41:06.724369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:43.914 [2024-04-24 21:41:06.724381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:43.914 [2024-04-24 21:41:06.724555] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:43.914 [2024-04-24 21:41:06.724726] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:43.914 [2024-04-24 21:41:06.724737] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:43.914 [2024-04-24 21:41:06.724747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:43.914 [2024-04-24 21:41:06.727404] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
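The three reactor notices line up with the '-m 0xE' core mask the target was started with: 0xE is binary 1110, i.e. cores 1, 2 and 3 with core 0 left free, which also matches the earlier "Total cores available: 3" notice. A one-off shell check of that arithmetic (illustrative only):

    # Decode the reactor core mask 0xE -> cores 1, 2, 3 (bit 0 is clear).
    mask=0xE
    printf 'mask %s -> cores:' "$mask"
    for core in 0 1 2 3; do
        (( (mask >> core) & 1 )) && printf ' %d' "$core"
    done
    echo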
00:25:43.914 [2024-04-24 21:41:06.736210] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.914 [2024-04-24 21:41:06.736897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.914 [2024-04-24 21:41:06.737328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.914 [2024-04-24 21:41:06.737342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.914 [2024-04-24 21:41:06.737353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.914 [2024-04-24 21:41:06.737529] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.914 [2024-04-24 21:41:06.737701] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.914 [2024-04-24 21:41:06.737713] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.914 [2024-04-24 21:41:06.737723] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.914 [2024-04-24 21:41:06.740373] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.914 [2024-04-24 21:41:06.749138] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.914 [2024-04-24 21:41:06.749821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.914 [2024-04-24 21:41:06.750172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.914 [2024-04-24 21:41:06.750186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.914 [2024-04-24 21:41:06.750196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.914 [2024-04-24 21:41:06.750368] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.914 [2024-04-24 21:41:06.750545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.914 [2024-04-24 21:41:06.750556] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.914 [2024-04-24 21:41:06.750572] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.914 [2024-04-24 21:41:06.753226] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.914 [2024-04-24 21:41:06.762002] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.914 [2024-04-24 21:41:06.762621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.914 [2024-04-24 21:41:06.763076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.914 [2024-04-24 21:41:06.763090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.914 [2024-04-24 21:41:06.763101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.914 [2024-04-24 21:41:06.763271] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.914 [2024-04-24 21:41:06.763442] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.914 [2024-04-24 21:41:06.763458] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.914 [2024-04-24 21:41:06.763468] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.914 [2024-04-24 21:41:06.766112] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.914 [2024-04-24 21:41:06.774874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.914 [2024-04-24 21:41:06.775555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.914 [2024-04-24 21:41:06.775956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.914 [2024-04-24 21:41:06.775970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.915 [2024-04-24 21:41:06.775980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.915 [2024-04-24 21:41:06.776150] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.915 [2024-04-24 21:41:06.776320] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.915 [2024-04-24 21:41:06.776331] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.915 [2024-04-24 21:41:06.776341] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.915 [2024-04-24 21:41:06.778995] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.915 [2024-04-24 21:41:06.787763] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.915 [2024-04-24 21:41:06.788431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.915 [2024-04-24 21:41:06.788889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.915 [2024-04-24 21:41:06.788903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:43.915 [2024-04-24 21:41:06.788914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:43.915 [2024-04-24 21:41:06.789084] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:43.915 [2024-04-24 21:41:06.789259] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.915 [2024-04-24 21:41:06.789271] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.915 [2024-04-24 21:41:06.789280] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.915 [2024-04-24 21:41:06.791934] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.174 [2024-04-24 21:41:06.800799] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.174 [2024-04-24 21:41:06.801477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.174 [2024-04-24 21:41:06.801953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.174 [2024-04-24 21:41:06.801966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:44.174 [2024-04-24 21:41:06.801976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:44.174 [2024-04-24 21:41:06.802148] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:44.174 [2024-04-24 21:41:06.802319] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.174 [2024-04-24 21:41:06.802330] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.174 [2024-04-24 21:41:06.802340] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.174 [2024-04-24 21:41:06.805026] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.174 [2024-04-24 21:41:06.813809] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.174 [2024-04-24 21:41:06.814474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.174 [2024-04-24 21:41:06.814900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.174 [2024-04-24 21:41:06.814914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:44.174 [2024-04-24 21:41:06.814924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:44.174 [2024-04-24 21:41:06.815093] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:44.174 [2024-04-24 21:41:06.815266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.174 [2024-04-24 21:41:06.815278] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.174 [2024-04-24 21:41:06.815287] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.174 [2024-04-24 21:41:06.817938] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.174 [2024-04-24 21:41:06.826689] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.174 [2024-04-24 21:41:06.827299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.174 [2024-04-24 21:41:06.827696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.174 [2024-04-24 21:41:06.827710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:44.174 [2024-04-24 21:41:06.827720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:44.174 [2024-04-24 21:41:06.827889] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:44.174 [2024-04-24 21:41:06.828058] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.174 [2024-04-24 21:41:06.828070] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.174 [2024-04-24 21:41:06.828079] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.174 [2024-04-24 21:41:06.830735] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.174 [2024-04-24 21:41:06.839649] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.174 [2024-04-24 21:41:06.840338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.174 [2024-04-24 21:41:06.840754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.174 [2024-04-24 21:41:06.840767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:44.174 [2024-04-24 21:41:06.840777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:44.174 [2024-04-24 21:41:06.840947] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:44.174 [2024-04-24 21:41:06.841117] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.174 [2024-04-24 21:41:06.841128] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.174 [2024-04-24 21:41:06.841138] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.174 [2024-04-24 21:41:06.843810] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.174 [2024-04-24 21:41:06.852587] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.174 [2024-04-24 21:41:06.853193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.174 [2024-04-24 21:41:06.853574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.174 [2024-04-24 21:41:06.853588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:44.174 [2024-04-24 21:41:06.853598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:44.174 [2024-04-24 21:41:06.853768] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:44.174 [2024-04-24 21:41:06.853939] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.174 [2024-04-24 21:41:06.853950] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.174 [2024-04-24 21:41:06.853959] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.174 [2024-04-24 21:41:06.856614] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.174 [2024-04-24 21:41:06.865522] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.174 [2024-04-24 21:41:06.866194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.174 [2024-04-24 21:41:06.866615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.174 [2024-04-24 21:41:06.866629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:44.174 [2024-04-24 21:41:06.866639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:44.174 [2024-04-24 21:41:06.866808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:44.174 [2024-04-24 21:41:06.866979] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.174 [2024-04-24 21:41:06.866990] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.174 [2024-04-24 21:41:06.867000] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.174 [2024-04-24 21:41:06.869652] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.174 [2024-04-24 21:41:06.878418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.174 [2024-04-24 21:41:06.879097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.174 [2024-04-24 21:41:06.879575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.174 [2024-04-24 21:41:06.879589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:44.174 [2024-04-24 21:41:06.879598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:44.175 [2024-04-24 21:41:06.879769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:44.175 [2024-04-24 21:41:06.879939] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.175 [2024-04-24 21:41:06.879950] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.175 [2024-04-24 21:41:06.879959] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.175 [2024-04-24 21:41:06.882607] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.175 [2024-04-24 21:41:06.891378] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.175 [2024-04-24 21:41:06.892052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.175 [2024-04-24 21:41:06.892539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.175 [2024-04-24 21:41:06.892553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:44.175 [2024-04-24 21:41:06.892564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:44.175 [2024-04-24 21:41:06.892731] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:44.175 [2024-04-24 21:41:06.892897] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.175 [2024-04-24 21:41:06.892908] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.175 [2024-04-24 21:41:06.892917] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.175 [2024-04-24 21:41:06.895564] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.175 [2024-04-24 21:41:06.904311] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.175 [2024-04-24 21:41:06.904968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.175 [2024-04-24 21:41:06.905438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.175 [2024-04-24 21:41:06.905456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:44.175 [2024-04-24 21:41:06.905467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:44.175 [2024-04-24 21:41:06.905636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:44.175 [2024-04-24 21:41:06.905806] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.175 [2024-04-24 21:41:06.905818] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.175 [2024-04-24 21:41:06.905827] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.175 [2024-04-24 21:41:06.908473] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.175 [2024-04-24 21:41:06.917215] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.175 [2024-04-24 21:41:06.917885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.175 [2024-04-24 21:41:06.918361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.175 [2024-04-24 21:41:06.918377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:44.175 [2024-04-24 21:41:06.918386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:44.175 [2024-04-24 21:41:06.918558] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:44.175 [2024-04-24 21:41:06.918728] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.175 [2024-04-24 21:41:06.918739] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.175 [2024-04-24 21:41:06.918748] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.175 [2024-04-24 21:41:06.921401] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.175 [2024-04-24 21:41:06.930156] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.175 [2024-04-24 21:41:06.930827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.175 [2024-04-24 21:41:06.931303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.175 [2024-04-24 21:41:06.931316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:44.175 [2024-04-24 21:41:06.931326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:44.175 [2024-04-24 21:41:06.931499] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:44.175 [2024-04-24 21:41:06.931669] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.175 [2024-04-24 21:41:06.931681] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.175 [2024-04-24 21:41:06.931689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.175 [2024-04-24 21:41:06.934328] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.175 [2024-04-24 21:41:06.943087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.175 [2024-04-24 21:41:06.943757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:06.944231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:06.944244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.175 [2024-04-24 21:41:06.944254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.175 [2024-04-24 21:41:06.944424] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.175 [2024-04-24 21:41:06.944598] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.175 [2024-04-24 21:41:06.944610] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.175 [2024-04-24 21:41:06.944619] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.175 [2024-04-24 21:41:06.947270] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.175 [2024-04-24 21:41:06.956020] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.175 [2024-04-24 21:41:06.956684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:06.957156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:06.957169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.175 [2024-04-24 21:41:06.957182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.175 [2024-04-24 21:41:06.957351] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.175 [2024-04-24 21:41:06.957525] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.175 [2024-04-24 21:41:06.957536] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.175 [2024-04-24 21:41:06.957545] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.175 [2024-04-24 21:41:06.960232] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.175 [2024-04-24 21:41:06.968988] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.175 [2024-04-24 21:41:06.969637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:06.970113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:06.970126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.175 [2024-04-24 21:41:06.970136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.175 [2024-04-24 21:41:06.970305] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.175 [2024-04-24 21:41:06.970479] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.175 [2024-04-24 21:41:06.970491] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.175 [2024-04-24 21:41:06.970500] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.175 [2024-04-24 21:41:06.973148] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.175 [2024-04-24 21:41:06.981907] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.175 [2024-04-24 21:41:06.982574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:06.983026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:06.983039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.175 [2024-04-24 21:41:06.983049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.175 [2024-04-24 21:41:06.983219] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.175 [2024-04-24 21:41:06.983388] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.175 [2024-04-24 21:41:06.983399] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.175 [2024-04-24 21:41:06.983407] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.175 [2024-04-24 21:41:06.986054] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.175 [2024-04-24 21:41:06.994815] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.175 [2024-04-24 21:41:06.995470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:06.995945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:06.995959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.175 [2024-04-24 21:41:06.995969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.175 [2024-04-24 21:41:06.996142] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.175 [2024-04-24 21:41:06.996312] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.175 [2024-04-24 21:41:06.996324] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.175 [2024-04-24 21:41:06.996333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.175 [2024-04-24 21:41:06.998987] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.175 [2024-04-24 21:41:07.007737] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.175 [2024-04-24 21:41:07.008404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:07.008793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:07.008807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.175 [2024-04-24 21:41:07.008817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.175 [2024-04-24 21:41:07.008987] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.175 [2024-04-24 21:41:07.009157] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.175 [2024-04-24 21:41:07.009169] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.175 [2024-04-24 21:41:07.009178] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.175 [2024-04-24 21:41:07.011833] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.175 [2024-04-24 21:41:07.020587] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.175 [2024-04-24 21:41:07.021257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:07.021732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:07.021746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.175 [2024-04-24 21:41:07.021756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.175 [2024-04-24 21:41:07.021926] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.175 [2024-04-24 21:41:07.022096] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.175 [2024-04-24 21:41:07.022107] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.175 [2024-04-24 21:41:07.022116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.175 [2024-04-24 21:41:07.024767] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.175 [2024-04-24 21:41:07.033530] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.175 [2024-04-24 21:41:07.034117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:07.034569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:07.034583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.175 [2024-04-24 21:41:07.034593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.175 [2024-04-24 21:41:07.034764] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.175 [2024-04-24 21:41:07.034936] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.175 [2024-04-24 21:41:07.034947] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.175 [2024-04-24 21:41:07.034957] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.175 [2024-04-24 21:41:07.037604] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.175 [2024-04-24 21:41:07.046517] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.175 [2024-04-24 21:41:07.046901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:07.047377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.175 [2024-04-24 21:41:07.047390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.175 [2024-04-24 21:41:07.047400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.175 [2024-04-24 21:41:07.047576] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.175 [2024-04-24 21:41:07.047746] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.175 [2024-04-24 21:41:07.047758] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.175 [2024-04-24 21:41:07.047767] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.175 [2024-04-24 21:41:07.050419] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.175 [2024-04-24 21:41:07.059556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.435 [2024-04-24 21:41:07.060241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.060724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.060742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.435 [2024-04-24 21:41:07.060753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.435 [2024-04-24 21:41:07.060929] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.435 [2024-04-24 21:41:07.061100] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.435 [2024-04-24 21:41:07.061112] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.435 [2024-04-24 21:41:07.061122] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.435 [2024-04-24 21:41:07.063780] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.435 [2024-04-24 21:41:07.072420] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.435 [2024-04-24 21:41:07.073018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.073420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.073433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.435 [2024-04-24 21:41:07.073443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.435 [2024-04-24 21:41:07.073622] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.435 [2024-04-24 21:41:07.073791] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.435 [2024-04-24 21:41:07.073806] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.435 [2024-04-24 21:41:07.073816] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.435 [2024-04-24 21:41:07.076472] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.435 [2024-04-24 21:41:07.085398] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.435 [2024-04-24 21:41:07.086074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.086506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.086520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.435 [2024-04-24 21:41:07.086530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.435 [2024-04-24 21:41:07.086700] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.435 [2024-04-24 21:41:07.086869] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.435 [2024-04-24 21:41:07.086880] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.435 [2024-04-24 21:41:07.086889] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.435 [2024-04-24 21:41:07.089542] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.435 [2024-04-24 21:41:07.098314] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.435 [2024-04-24 21:41:07.098716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.099132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.099145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.435 [2024-04-24 21:41:07.099155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.435 [2024-04-24 21:41:07.099324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.435 [2024-04-24 21:41:07.099501] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.435 [2024-04-24 21:41:07.099513] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.435 [2024-04-24 21:41:07.099522] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.435 [2024-04-24 21:41:07.102175] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.435 [2024-04-24 21:41:07.111267] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.435 [2024-04-24 21:41:07.111938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.112390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.112404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.435 [2024-04-24 21:41:07.112414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.435 [2024-04-24 21:41:07.112589] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.435 [2024-04-24 21:41:07.112759] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.435 [2024-04-24 21:41:07.112771] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.435 [2024-04-24 21:41:07.112783] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.435 [2024-04-24 21:41:07.115438] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.435 [2024-04-24 21:41:07.124189] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.435 [2024-04-24 21:41:07.124856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.125286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.125299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.435 [2024-04-24 21:41:07.125309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.435 [2024-04-24 21:41:07.125483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.435 [2024-04-24 21:41:07.125654] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.435 [2024-04-24 21:41:07.125665] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.435 [2024-04-24 21:41:07.125674] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.435 [2024-04-24 21:41:07.128326] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.435 [2024-04-24 21:41:07.137100] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.435 [2024-04-24 21:41:07.137771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.138244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.138257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.435 [2024-04-24 21:41:07.138267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.435 [2024-04-24 21:41:07.138436] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.435 [2024-04-24 21:41:07.138610] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.435 [2024-04-24 21:41:07.138622] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.435 [2024-04-24 21:41:07.138631] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.435 [2024-04-24 21:41:07.141283] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.435 [2024-04-24 21:41:07.150063] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.435 [2024-04-24 21:41:07.150669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.151144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.435 [2024-04-24 21:41:07.151158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.435 [2024-04-24 21:41:07.151167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.435 [2024-04-24 21:41:07.151336] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.435 [2024-04-24 21:41:07.151510] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.435 [2024-04-24 21:41:07.151522] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.435 [2024-04-24 21:41:07.151531] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.435 [2024-04-24 21:41:07.154188] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.436 [2024-04-24 21:41:07.162954] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.436 [2024-04-24 21:41:07.163556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.163950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.163963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.436 [2024-04-24 21:41:07.163973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.436 [2024-04-24 21:41:07.164142] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.436 [2024-04-24 21:41:07.164311] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.436 [2024-04-24 21:41:07.164323] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.436 [2024-04-24 21:41:07.164332] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.436 [2024-04-24 21:41:07.166982] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
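The attempts are paced roughly 13 ms apart: each failed reset completes and bdev_nvme immediately schedules another disconnect/reconnect poll, so the loop only ends once the target starts listening. A bash equivalent of that wait-until-listening loop (a hypothetical helper, same /dev/tcp assumption as the sketch earlier in this section):

    # Poll the target address until connect() stops being refused, which is
    # effectively what spdk_nvme_ctrlr_reconnect_poll_async keeps retrying.
    wait_for_listener() {
        local addr=$1 port=$2
        until (exec 3<>"/dev/tcp/$addr/$port") 2>/dev/null; do
            sleep 0.01
        done
    }
    wait_for_listener 10.0.0.2 4420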
00:25:44.436 [2024-04-24 21:41:07.175912] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.436 [2024-04-24 21:41:07.176581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.177082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.177095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.436 [2024-04-24 21:41:07.177105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.436 [2024-04-24 21:41:07.177275] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.436 [2024-04-24 21:41:07.177447] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.436 [2024-04-24 21:41:07.177461] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.436 [2024-04-24 21:41:07.177470] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.436 [2024-04-24 21:41:07.180123] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.436 [2024-04-24 21:41:07.188880] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.436 [2024-04-24 21:41:07.189553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.189988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.190000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.436 [2024-04-24 21:41:07.190010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.436 [2024-04-24 21:41:07.190179] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.436 [2024-04-24 21:41:07.190349] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.436 [2024-04-24 21:41:07.190360] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.436 [2024-04-24 21:41:07.190369] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.436 [2024-04-24 21:41:07.193017] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.436 [2024-04-24 21:41:07.201800] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.436 [2024-04-24 21:41:07.202424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.202895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.202908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.436 [2024-04-24 21:41:07.202918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.436 [2024-04-24 21:41:07.203086] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.436 [2024-04-24 21:41:07.203255] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.436 [2024-04-24 21:41:07.203266] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.436 [2024-04-24 21:41:07.203276] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.436 [2024-04-24 21:41:07.205934] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.436 [2024-04-24 21:41:07.214713] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.436 [2024-04-24 21:41:07.215274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.215709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.215723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.436 [2024-04-24 21:41:07.215732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.436 [2024-04-24 21:41:07.215901] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.436 [2024-04-24 21:41:07.216071] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.436 [2024-04-24 21:41:07.216082] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.436 [2024-04-24 21:41:07.216091] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.436 [2024-04-24 21:41:07.218741] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.436 [2024-04-24 21:41:07.227659] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.436 [2024-04-24 21:41:07.228261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.228740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.228754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.436 [2024-04-24 21:41:07.228764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.436 [2024-04-24 21:41:07.228934] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.436 [2024-04-24 21:41:07.229103] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.436 [2024-04-24 21:41:07.229113] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.436 [2024-04-24 21:41:07.229123] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.436 [2024-04-24 21:41:07.231779] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.436 [2024-04-24 21:41:07.240553] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.436 [2024-04-24 21:41:07.241150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.241485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.241498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.436 [2024-04-24 21:41:07.241508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.436 [2024-04-24 21:41:07.241679] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.436 [2024-04-24 21:41:07.241850] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.436 [2024-04-24 21:41:07.241860] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.436 [2024-04-24 21:41:07.241869] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.436 [2024-04-24 21:41:07.244530] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.436 [2024-04-24 21:41:07.253461] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.436 [2024-04-24 21:41:07.254128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.254600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.254613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.436 [2024-04-24 21:41:07.254623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.436 [2024-04-24 21:41:07.254792] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.436 [2024-04-24 21:41:07.254961] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.436 [2024-04-24 21:41:07.254972] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.436 [2024-04-24 21:41:07.254981] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.436 [2024-04-24 21:41:07.257645] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.436 [2024-04-24 21:41:07.266429] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.436 [2024-04-24 21:41:07.267083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.267509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.267522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.436 [2024-04-24 21:41:07.267532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.436 [2024-04-24 21:41:07.267701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.436 [2024-04-24 21:41:07.267870] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.436 [2024-04-24 21:41:07.267881] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.436 [2024-04-24 21:41:07.267890] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.436 [2024-04-24 21:41:07.270549] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.436 [2024-04-24 21:41:07.279324] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.436 [2024-04-24 21:41:07.279996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.436 [2024-04-24 21:41:07.280469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.437 [2024-04-24 21:41:07.280487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.437 [2024-04-24 21:41:07.280497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.437 [2024-04-24 21:41:07.280667] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.437 [2024-04-24 21:41:07.280837] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.437 [2024-04-24 21:41:07.280847] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.437 [2024-04-24 21:41:07.280855] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.437 [2024-04-24 21:41:07.283510] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.437 [2024-04-24 21:41:07.292265] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.437 [2024-04-24 21:41:07.292845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.437 [2024-04-24 21:41:07.293237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.437 [2024-04-24 21:41:07.293249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.437 [2024-04-24 21:41:07.293259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.437 [2024-04-24 21:41:07.293428] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.437 [2024-04-24 21:41:07.293602] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.437 [2024-04-24 21:41:07.293613] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.437 [2024-04-24 21:41:07.293623] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.437 [2024-04-24 21:41:07.296271] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.437 [2024-04-24 21:41:07.305193] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.437 [2024-04-24 21:41:07.305723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.437 [2024-04-24 21:41:07.306198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.437 [2024-04-24 21:41:07.306210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.437 [2024-04-24 21:41:07.306220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.437 [2024-04-24 21:41:07.306389] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.437 [2024-04-24 21:41:07.306565] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.437 [2024-04-24 21:41:07.306576] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.437 [2024-04-24 21:41:07.306585] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.437 [2024-04-24 21:41:07.309238] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.437 [2024-04-24 21:41:07.318204] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.437 [2024-04-24 21:41:07.318824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.437 [2024-04-24 21:41:07.319230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.437 [2024-04-24 21:41:07.319243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.437 [2024-04-24 21:41:07.319256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.437 [2024-04-24 21:41:07.319427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.437 [2024-04-24 21:41:07.319602] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.437 [2024-04-24 21:41:07.319614] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.437 [2024-04-24 21:41:07.319622] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.696 [2024-04-24 21:41:07.322314] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.696 [2024-04-24 21:41:07.331121] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.696 [2024-04-24 21:41:07.331751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.696 [2024-04-24 21:41:07.332158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.696 [2024-04-24 21:41:07.332171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.696 [2024-04-24 21:41:07.332181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.696 [2024-04-24 21:41:07.332351] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.696 [2024-04-24 21:41:07.332524] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.696 [2024-04-24 21:41:07.332535] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.696 [2024-04-24 21:41:07.332544] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.696 [2024-04-24 21:41:07.335193] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.696 [2024-04-24 21:41:07.344130] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.696 [2024-04-24 21:41:07.344787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.696 [2024-04-24 21:41:07.345140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.696 [2024-04-24 21:41:07.345152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.696 [2024-04-24 21:41:07.345162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.696 [2024-04-24 21:41:07.345331] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.696 [2024-04-24 21:41:07.345504] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.696 [2024-04-24 21:41:07.345515] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.696 [2024-04-24 21:41:07.345524] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.696 [2024-04-24 21:41:07.348175] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.696 [2024-04-24 21:41:07.357090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.696 [2024-04-24 21:41:07.357690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.697 [2024-04-24 21:41:07.358116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.697 [2024-04-24 21:41:07.358128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.697 [2024-04-24 21:41:07.358138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.697 [2024-04-24 21:41:07.358310] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.697 [2024-04-24 21:41:07.358484] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.697 [2024-04-24 21:41:07.358495] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.697 [2024-04-24 21:41:07.358504] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.697 [2024-04-24 21:41:07.361147] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.697 21:41:07 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:25:44.697 21:41:07 -- common/autotest_common.sh@850 -- # return 0
00:25:44.697 21:41:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:25:44.697 21:41:07 -- common/autotest_common.sh@716 -- # xtrace_disable
00:25:44.697 21:41:07 -- common/autotest_common.sh@10 -- # set +x
00:25:44.697 [2024-04-24 21:41:07.370085] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.697 [2024-04-24 21:41:07.370686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.697 [2024-04-24 21:41:07.371102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.697 [2024-04-24 21:41:07.371115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.697 [2024-04-24 21:41:07.371124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.697 [2024-04-24 21:41:07.371293] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.697 [2024-04-24 21:41:07.371468] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.697 [2024-04-24 21:41:07.371479] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.697 [2024-04-24 21:41:07.371488] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.697 [2024-04-24 21:41:07.374132] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.697 [2024-04-24 21:41:07.383059] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.697 [2024-04-24 21:41:07.383728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.697 [2024-04-24 21:41:07.384078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.697 [2024-04-24 21:41:07.384091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.697 [2024-04-24 21:41:07.384101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.697 [2024-04-24 21:41:07.384270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.697 [2024-04-24 21:41:07.384439] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.697 [2024-04-24 21:41:07.384455] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.697 [2024-04-24 21:41:07.384464] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.697 [2024-04-24 21:41:07.387115] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.697 [2024-04-24 21:41:07.396050] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.697 [2024-04-24 21:41:07.396647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.697 [2024-04-24 21:41:07.396995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.697 [2024-04-24 21:41:07.397008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.697 [2024-04-24 21:41:07.397021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.697 [2024-04-24 21:41:07.397190] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.697 [2024-04-24 21:41:07.397360] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.697 [2024-04-24 21:41:07.397371] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.697 [2024-04-24 21:41:07.397380] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.697 [2024-04-24 21:41:07.400033] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.697 [2024-04-24 21:41:07.409004] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.697 [2024-04-24 21:41:07.409608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.697 [2024-04-24 21:41:07.409965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.697 [2024-04-24 21:41:07.409977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.697 [2024-04-24 21:41:07.409987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.697 [2024-04-24 21:41:07.410156] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.697 [2024-04-24 21:41:07.410325] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.697 [2024-04-24 21:41:07.410335] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.697 [2024-04-24 21:41:07.410344] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.697 21:41:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:44.697 21:41:07 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:44.697 21:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:44.697 21:41:07 -- common/autotest_common.sh@10 -- # set +x
00:25:44.697 [2024-04-24 21:41:07.413001] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.697 [2024-04-24 21:41:07.415767] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:44.697 21:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:44.697 21:41:07 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:44.697 [2024-04-24 21:41:07.421911] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.697 21:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:44.697 21:41:07 -- common/autotest_common.sh@10 -- # set +x
00:25:44.697 [2024-04-24 21:41:07.422516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.697 [2024-04-24 21:41:07.422918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.697 [2024-04-24 21:41:07.422931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.697 [2024-04-24 21:41:07.422940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.697 [2024-04-24 21:41:07.423109] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.697 [2024-04-24 21:41:07.423278] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.697 [2024-04-24 21:41:07.423288] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.697 [2024-04-24 21:41:07.423297] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.697 [2024-04-24 21:41:07.425952] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.697 [2024-04-24 21:41:07.434865] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.697 [2024-04-24 21:41:07.435534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.697 [2024-04-24 21:41:07.435994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.697 [2024-04-24 21:41:07.436006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.697 [2024-04-24 21:41:07.436016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.697 [2024-04-24 21:41:07.436184] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.697 [2024-04-24 21:41:07.436354] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.697 [2024-04-24 21:41:07.436364] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.697 [2024-04-24 21:41:07.436373] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.697 [2024-04-24 21:41:07.439025] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.698 [2024-04-24 21:41:07.447798] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:44.698 [2024-04-24 21:41:07.448419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.698 [2024-04-24 21:41:07.448894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:44.698 [2024-04-24 21:41:07.448907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420
00:25:44.698 [2024-04-24 21:41:07.448918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set
00:25:44.698 [2024-04-24 21:41:07.449087] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor
00:25:44.698 [2024-04-24 21:41:07.449256] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:44.698 [2024-04-24 21:41:07.449267] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:44.698 [2024-04-24 21:41:07.449277] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:44.698 [2024-04-24 21:41:07.451935] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:44.698 Malloc0 00:25:44.698 21:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.698 21:41:07 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:44.698 21:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.698 21:41:07 -- common/autotest_common.sh@10 -- # set +x 00:25:44.698 [2024-04-24 21:41:07.460704] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.698 [2024-04-24 21:41:07.461382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.698 [2024-04-24 21:41:07.461886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.698 [2024-04-24 21:41:07.461899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:44.698 [2024-04-24 21:41:07.461909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:44.698 [2024-04-24 21:41:07.462078] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:44.698 [2024-04-24 21:41:07.462248] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.698 [2024-04-24 21:41:07.462259] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.698 [2024-04-24 21:41:07.462271] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.698 [2024-04-24 21:41:07.464922] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:44.698 21:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.698 21:41:07 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:44.698 21:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.698 21:41:07 -- common/autotest_common.sh@10 -- # set +x 00:25:44.698 [2024-04-24 21:41:07.473717] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.698 21:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.698 21:41:07 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:44.698 [2024-04-24 21:41:07.474411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.698 21:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.698 21:41:07 -- common/autotest_common.sh@10 -- # set +x 00:25:44.698 [2024-04-24 21:41:07.474864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:44.698 [2024-04-24 21:41:07.474877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5ab30 with addr=10.0.0.2, port=4420 00:25:44.698 [2024-04-24 21:41:07.474887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5ab30 is same with the state(5) to be set 00:25:44.698 [2024-04-24 21:41:07.475056] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5ab30 (9): Bad file descriptor 00:25:44.698 [2024-04-24 21:41:07.475226] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:44.698 [2024-04-24 21:41:07.475236] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:44.698 [2024-04-24 21:41:07.475245] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.698 [2024-04-24 21:41:07.477369] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:44.698 [2024-04-24 21:41:07.477904] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:44.698 21:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.698 21:41:07 -- host/bdevperf.sh@38 -- # wait 2996775 00:25:44.698 [2024-04-24 21:41:07.486680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.698 [2024-04-24 21:41:07.558276] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
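For reference, the rpc_cmd invocations threaded through the trace above are thin wrappers over SPDK's scripts/rpc.py, so the target-side setup that bdevperf.sh drives can be read as the standalone sequence below. This is an illustrative sketch only, assuming a running nvmf_tgt and the stock rpc.py from the SPDK tree; the bdev name, NQN, serial number, and listen address are copied from the logged commands, not invented here.

    # TCP transport with an 8192-byte I/O unit size (host/bdevperf.sh@17)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks to back the namespace (host/bdevperf.sh@18)
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem that allows any host (-a), with the serial shown in the log (host/bdevperf.sh@19)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # expose Malloc0 as a namespace of the subsystem (host/bdevperf.sh@20)
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # listen on the address the initiator keeps redialing in this log (host/bdevperf.sh@21)
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420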
00:25:54.668 
00:25:54.668 Latency(us)
00:25:54.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:54.668 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:54.668 Verification LBA range: start 0x0 length 0x4000
00:25:54.668 Nvme1n1 : 15.01 8485.59 33.15 12635.21 0.00 6041.14 1441.79 29360.13
00:25:54.668 ===================================================================================================================
00:25:54.668 Total : 8485.59 33.15 12635.21 0.00 6041.14 1441.79 29360.13
00:25:54.668 21:41:16 -- host/bdevperf.sh@39 -- # sync
00:25:54.668 21:41:16 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:54.668 21:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:54.668 21:41:16 -- common/autotest_common.sh@10 -- # set +x
00:25:54.668 21:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:54.668 21:41:16 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:25:54.668 21:41:16 -- host/bdevperf.sh@44 -- # nvmftestfini
00:25:54.668 21:41:16 -- nvmf/common.sh@477 -- # nvmfcleanup
00:25:54.668 21:41:16 -- nvmf/common.sh@117 -- # sync
00:25:54.668 21:41:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:54.668 21:41:16 -- nvmf/common.sh@120 -- # set +e
00:25:54.668 21:41:16 -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:54.668 21:41:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:54.668 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:25:54.668 21:41:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:54.668 21:41:16 -- nvmf/common.sh@124 -- # set -e
00:25:54.668 21:41:16 -- nvmf/common.sh@125 -- # return 0
00:25:54.668 21:41:16 -- nvmf/common.sh@478 -- # '[' -n 2997844 ']'
00:25:54.668 21:41:16 -- nvmf/common.sh@479 -- # killprocess 2997844
00:25:54.668 21:41:16 -- common/autotest_common.sh@936 -- # '[' -z 2997844 ']'
00:25:54.668 21:41:16 -- common/autotest_common.sh@940 -- # kill -0 2997844
00:25:54.668 21:41:16 -- common/autotest_common.sh@941 -- # uname
00:25:54.668 21:41:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:54.668 21:41:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2997844
00:25:54.668 21:41:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:25:54.668 21:41:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:25:54.668 21:41:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2997844'
killing process with pid 2997844
21:41:16 -- common/autotest_common.sh@955 -- # kill 2997844
00:25:54.668 21:41:16 -- common/autotest_common.sh@960 -- # wait 2997844
00:25:54.668 21:41:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:25:54.668 21:41:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:25:54.668 21:41:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:25:54.668 21:41:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:54.668 21:41:16 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:54.668 21:41:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:54.668 21:41:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:54.668 21:41:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:56.080 21:41:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:56.080 
00:25:56.080 real 0m27.481s
00:25:56.080 user 1m2.275s
00:25:56.080 sys 0m8.072s
00:25:56.080 21:41:18 --
common/autotest_common.sh@1112 -- # xtrace_disable 00:25:56.080 21:41:18 -- common/autotest_common.sh@10 -- # set +x 00:25:56.080 ************************************ 00:25:56.080 END TEST nvmf_bdevperf 00:25:56.080 ************************************ 00:25:56.080 21:41:18 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:56.080 21:41:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:56.080 21:41:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:56.080 21:41:18 -- common/autotest_common.sh@10 -- # set +x 00:25:56.080 ************************************ 00:25:56.080 START TEST nvmf_target_disconnect 00:25:56.080 ************************************ 00:25:56.080 21:41:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:56.080 * Looking for test storage... 00:25:56.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:56.080 21:41:18 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.080 21:41:18 -- nvmf/common.sh@7 -- # uname -s 00:25:56.080 21:41:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.080 21:41:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.080 21:41:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.080 21:41:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.080 21:41:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.080 21:41:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.080 21:41:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.080 21:41:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.080 21:41:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.080 21:41:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.080 21:41:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:56.080 21:41:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:56.081 21:41:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.081 21:41:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.081 21:41:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.081 21:41:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.081 21:41:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.081 21:41:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.081 21:41:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.081 21:41:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.081 21:41:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.081 21:41:18 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.081 21:41:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.081 21:41:18 -- paths/export.sh@5 -- # export PATH 00:25:56.081 21:41:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.081 21:41:18 -- nvmf/common.sh@47 -- # : 0 00:25:56.081 21:41:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:56.081 21:41:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:56.081 21:41:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:56.081 21:41:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.081 21:41:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.081 21:41:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:56.081 21:41:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:56.081 21:41:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:56.081 21:41:18 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:56.339 21:41:18 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:56.339 21:41:18 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:56.339 21:41:18 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:25:56.339 21:41:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:56.339 21:41:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.339 21:41:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:56.339 21:41:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:56.339 21:41:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:56.339 21:41:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.339 21:41:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:56.339 21:41:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.339 21:41:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:56.339 21:41:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:56.339 21:41:18 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:25:56.339 21:41:18 -- common/autotest_common.sh@10 -- # set +x 00:26:02.897 21:41:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:02.897 21:41:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:02.897 21:41:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:02.897 21:41:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:02.897 21:41:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:02.897 21:41:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:02.897 21:41:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:02.897 21:41:25 -- nvmf/common.sh@295 -- # net_devs=() 00:26:02.897 21:41:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:02.897 21:41:25 -- nvmf/common.sh@296 -- # e810=() 00:26:02.897 21:41:25 -- nvmf/common.sh@296 -- # local -ga e810 00:26:02.897 21:41:25 -- nvmf/common.sh@297 -- # x722=() 00:26:02.897 21:41:25 -- nvmf/common.sh@297 -- # local -ga x722 00:26:02.897 21:41:25 -- nvmf/common.sh@298 -- # mlx=() 00:26:02.897 21:41:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:02.897 21:41:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.897 21:41:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.897 21:41:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.897 21:41:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.897 21:41:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.897 21:41:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.897 21:41:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.897 21:41:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.897 21:41:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.897 21:41:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.897 21:41:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.897 21:41:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:02.897 21:41:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:02.897 21:41:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:02.897 21:41:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:02.897 21:41:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:02.897 21:41:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:02.897 21:41:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:02.897 21:41:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:02.897 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:02.897 21:41:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:02.897 21:41:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:02.897 21:41:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.898 21:41:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.898 21:41:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:02.898 21:41:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:02.898 21:41:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:02.898 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:02.898 21:41:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:02.898 21:41:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:02.898 21:41:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.898 21:41:25 
-- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.898 21:41:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:02.898 21:41:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:02.898 21:41:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:02.898 21:41:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:02.898 21:41:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:02.898 21:41:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.898 21:41:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:02.898 21:41:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.898 21:41:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:02.898 Found net devices under 0000:af:00.0: cvl_0_0 00:26:02.898 21:41:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.898 21:41:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:02.898 21:41:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.898 21:41:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:02.898 21:41:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.898 21:41:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:02.898 Found net devices under 0000:af:00.1: cvl_0_1 00:26:02.898 21:41:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.898 21:41:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:02.898 21:41:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:02.898 21:41:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:02.898 21:41:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:02.898 21:41:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:02.898 21:41:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.898 21:41:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.898 21:41:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.898 21:41:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:02.898 21:41:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.898 21:41:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.898 21:41:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:02.898 21:41:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.898 21:41:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.898 21:41:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:02.898 21:41:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:02.898 21:41:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.898 21:41:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.898 21:41:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.898 21:41:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.898 21:41:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:02.898 21:41:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.157 21:41:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:03.157 21:41:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:03.157 21:41:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:03.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:03.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms
00:26:03.157 
00:26:03.157 --- 10.0.0.2 ping statistics ---
00:26:03.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:03.157 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms
00:26:03.157 21:41:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:03.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:03.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms
00:26:03.157 
00:26:03.157 --- 10.0.0.1 ping statistics ---
00:26:03.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:03.157 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms
00:26:03.157 21:41:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:03.157 21:41:25 -- nvmf/common.sh@411 -- # return 0
00:26:03.157 21:41:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:26:03.157 21:41:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:03.157 21:41:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:26:03.157 21:41:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:26:03.157 21:41:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:03.157 21:41:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:26:03.157 21:41:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:26:03.157 21:41:25 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:26:03.157 21:41:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:03.157 21:41:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:03.157 21:41:25 -- common/autotest_common.sh@10 -- # set +x
00:26:03.416 ************************************
00:26:03.416 START TEST nvmf_target_disconnect_tc1
00:26:03.416 ************************************
00:26:03.416 21:41:26 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1
00:26:03.416 21:41:26 -- host/target_disconnect.sh@32 -- # set +e
00:26:03.416 21:41:26 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:03.416 EAL: No free 2048 kB hugepages reported on node 1
00:26:03.416 [2024-04-24 21:41:26.187569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.416 [2024-04-24 21:41:26.188085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:03.416 [2024-04-24 21:41:26.188099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bcf660 with addr=10.0.0.2, port=4420
00:26:03.416 [2024-04-24 21:41:26.188125] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:26:03.416 [2024-04-24 21:41:26.188141] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:26:03.416 [2024-04-24 21:41:26.188151] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed
00:26:03.416 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:26:03.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:26:03.416 Initializing NVMe Controllers
00:26:03.416 21:41:26 -- host/target_disconnect.sh@33 -- # trap - ERR
00:26:03.416 21:41:26 -- host/target_disconnect.sh@33 -- # print_backtrace
00:26:03.416 21:41:26 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]]
00:26:03.416 21:41:26 -- common/autotest_common.sh@1139 -- # return 0
00:26:03.416 21:41:26 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']'
00:26:03.416 21:41:26 -- host/target_disconnect.sh@41 -- # set -e
00:26:03.416 
00:26:03.416 real 0m0.116s
00:26:03.416 user 0m0.046s
00:26:03.416 sys 0m0.065s
00:26:03.416 21:41:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:03.416 21:41:26 -- common/autotest_common.sh@10 -- # set +x
00:26:03.416 ************************************
00:26:03.416 END TEST nvmf_target_disconnect_tc1
00:26:03.416 ************************************
00:26:03.416 21:41:26 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:26:03.416 21:41:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:26:03.416 21:41:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:03.416 21:41:26 -- common/autotest_common.sh@10 -- # set +x
00:26:03.675 ************************************
00:26:03.675 START TEST nvmf_target_disconnect_tc2
00:26:03.675 ************************************
00:26:03.675 21:41:26 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2
00:26:03.675 21:41:26 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2
00:26:03.675 21:41:26 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:03.675 21:41:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:26:03.675 21:41:26 -- common/autotest_common.sh@710 -- # xtrace_disable
00:26:03.675 21:41:26 -- common/autotest_common.sh@10 -- # set +x
00:26:03.675 21:41:26 -- nvmf/common.sh@470 -- # nvmfpid=3003262
00:26:03.675 21:41:26 -- nvmf/common.sh@471 -- # waitforlisten 3003262
00:26:03.675 21:41:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:03.675 21:41:26 -- common/autotest_common.sh@817 -- # '[' -z 3003262 ']'
00:26:03.675 21:41:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:03.675 21:41:26 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:03.675 21:41:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
21:41:26 -- common/autotest_common.sh@826 -- # xtrace_disable
21:41:26 -- common/autotest_common.sh@10 -- # set +x
00:26:03.675 [2024-04-24 21:41:26.461404] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:26:03.675 [2024-04-24 21:41:26.461444] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:03.675 EAL: No free 2048 kB hugepages reported on node 1
00:26:03.675 [2024-04-24 21:41:26.547633] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:03.934 [2024-04-24 21:41:26.620852] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:03.934 [2024-04-24 21:41:26.620894] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:03.934 [2024-04-24 21:41:26.620903] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:03.934 [2024-04-24 21:41:26.620911] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:03.934 [2024-04-24 21:41:26.620918] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:03.934 [2024-04-24 21:41:26.621446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:26:03.934 [2024-04-24 21:41:26.621550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:26:03.934 [2024-04-24 21:41:26.621660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:26:03.934 [2024-04-24 21:41:26.621661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:26:04.501 21:41:27 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:04.501 21:41:27 -- common/autotest_common.sh@850 -- # return 0
00:26:04.501 21:41:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:26:04.501 21:41:27 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:04.501 21:41:27 -- common/autotest_common.sh@10 -- # set +x
00:26:04.502 21:41:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:04.502 21:41:27 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:04.502 21:41:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:04.502 21:41:27 -- common/autotest_common.sh@10 -- # set +x
00:26:04.502 Malloc0
00:26:04.502 21:41:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:04.502 21:41:27 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:04.502 21:41:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:04.502 21:41:27 -- common/autotest_common.sh@10 -- # set +x
00:26:04.502 [2024-04-24 21:41:27.333557] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:04.502 21:41:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:04.502 21:41:27 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:04.502 21:41:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:04.502 21:41:27 -- common/autotest_common.sh@10 -- # set +x
00:26:04.502 21:41:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:04.502 21:41:27 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:04.502 21:41:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:04.502 21:41:27 -- common/autotest_common.sh@10 -- # set +x
00:26:04.502 21:41:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:04.502 21:41:27 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:04.502 21:41:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:04.502 21:41:27 -- common/autotest_common.sh@10 -- # set +x
00:26:04.502 [2024-04-24 21:41:27.365843] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:04.502 21:41:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:04.502 21:41:27 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:04.502 21:41:27 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:04.502 21:41:27 -- common/autotest_common.sh@10 -- # set +x
00:26:04.502 21:41:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:04.502 21:41:27 -- host/target_disconnect.sh@50 -- # reconnectpid=3003470
00:26:04.502 21:41:27 -- host/target_disconnect.sh@52 -- # sleep 2
00:26:04.502 21:41:27 -- host/target_disconnect.sh@48 --
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:04.761 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.666 21:41:29 -- host/target_disconnect.sh@53 -- # kill -9 3003262 00:26:06.666 21:41:29 -- host/target_disconnect.sh@55 -- # sleep 2 00:26:06.666 Read completed with error (sct=0, sc=8) 00:26:06.666 starting I/O failed 00:26:06.666 Read completed with error (sct=0, sc=8) 00:26:06.666 starting I/O failed 00:26:06.666 Read completed with error (sct=0, sc=8) 00:26:06.666 starting I/O failed 00:26:06.666 Read completed with error (sct=0, sc=8) 00:26:06.666 starting I/O failed 00:26:06.666 Read completed with error (sct=0, sc=8) 00:26:06.666 starting I/O failed 00:26:06.666 Read completed with error (sct=0, sc=8) 00:26:06.666 starting I/O failed 00:26:06.666 Read completed with error (sct=0, sc=8) 00:26:06.666 starting I/O failed 00:26:06.666 Read completed with error (sct=0, sc=8) 00:26:06.666 starting I/O failed 00:26:06.666 Read completed with error (sct=0, sc=8) 00:26:06.666 starting I/O failed 00:26:06.666 Write completed with error (sct=0, sc=8) 00:26:06.666 starting I/O failed 00:26:06.666 Write completed with error (sct=0, sc=8) 00:26:06.666 starting I/O failed 00:26:06.666 Read completed with error (sct=0, sc=8) 00:26:06.666 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 [2024-04-24 21:41:29.395379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:06.667 
starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 [2024-04-24 21:41:29.395617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O 
failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 [2024-04-24 21:41:29.395839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 
Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Write completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.667 Read completed with error (sct=0, sc=8) 00:26:06.667 starting I/O failed 00:26:06.668 Write completed with error (sct=0, sc=8) 00:26:06.668 starting I/O failed 00:26:06.668 Read completed with error (sct=0, sc=8) 00:26:06.668 starting I/O failed 00:26:06.668 [2024-04-24 21:41:29.396057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:06.668 [2024-04-24 21:41:29.396477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.396946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.396988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.397515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.398006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.398046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.398522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.398955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.399000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.399476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.399881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.399919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 
00:26:06.668 [2024-04-24 21:41:29.400304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.400673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.400685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.401079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.401467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.401505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.402045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.402497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.402536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.403065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.403511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.403551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.404064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.404440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.404488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.404863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.405385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.405423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.405723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.406173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.406211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 
00:26:06.668 [2024-04-24 21:41:29.406742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.407177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.407188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.407656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.408178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.408216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.408768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.409148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.409187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.409646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.410064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.410101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.410581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.410904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.410915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.411384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.411779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.411791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.412164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.412620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.412632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 
00:26:06.668 [2024-04-24 21:41:29.413085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.413553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.413565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.413909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.414358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.414369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.414835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.415207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.415219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.415692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.415914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.415926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.416310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.416754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.416792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.417273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.417727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.417766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.418220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.418717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.418756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 
00:26:06.668 [2024-04-24 21:41:29.419123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.419517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.419528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.419926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.420387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.420425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.668 qpair failed and we were unable to recover it. 00:26:06.668 [2024-04-24 21:41:29.420841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.668 [2024-04-24 21:41:29.421076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.421087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.421500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.421840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.421879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.422402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.422933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.422974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.423480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.424001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.424038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.424550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.425012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.425050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 
00:26:06.669 [2024-04-24 21:41:29.425490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.426033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.426071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.426510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.427034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.427072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.427466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.427982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.428020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.428492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.428919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.428957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.429222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.429578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.429615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.430142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.430594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.430632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.431067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.431566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.431604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 
00:26:06.669 [2024-04-24 21:41:29.432117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.432477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.432515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.433034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.433468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.433507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.433871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.434302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.434340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.434795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.435207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.435218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.435601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.436055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.436093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.436646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.437164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.437202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.437758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.438300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.438338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 
00:26:06.669 [2024-04-24 21:41:29.438794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.439234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.439245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.439725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.440172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.440209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.440744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.441187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.441199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.441668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.442071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.442082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.442493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.442867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.442901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.443369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.443842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.443880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.444386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.444863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.444901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 
00:26:06.669 [2024-04-24 21:41:29.445438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.445953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.445991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.446547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.446994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.447032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.669 [2024-04-24 21:41:29.447615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.448114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.669 [2024-04-24 21:41:29.448153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.669 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.448702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.449199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.449237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.449788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.450167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.450204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.450683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.451160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.451197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.451632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.452172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.452210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 
00:26:06.670 [2024-04-24 21:41:29.452741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.453260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.453298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.453586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.454098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.454136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.455817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.456272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.456286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.456764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.457142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.457181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.457627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.458088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.458128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.458584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.459117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.459155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.459630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.460123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.460134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 
00:26:06.670 [2024-04-24 21:41:29.460523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.461001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.461039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.461563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.462083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.462120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.462635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.463176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.463187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.463583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.464023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.464062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.464569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.465014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.465052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.465513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.466013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.466051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.466589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.467067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.467105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 
00:26:06.670 [2024-04-24 21:41:29.467618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.467998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.468037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.468504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.469024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.469062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.469594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.470024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.470062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.470500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.470937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.470975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.471459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.471844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.471884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.472304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.472720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.472760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.473226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.473721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.473759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 
00:26:06.670 [2024-04-24 21:41:29.474269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.474631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.474669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.475214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.475652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.475694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.476208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.476706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.476740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.670 qpair failed and we were unable to recover it. 00:26:06.670 [2024-04-24 21:41:29.477187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.670 [2024-04-24 21:41:29.477651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.477662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.478076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.478544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.478583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.478992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.479438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.479487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.479847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.480313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.480351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 
00:26:06.671 [2024-04-24 21:41:29.480880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.481334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.481373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.481912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.482437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.482485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.482929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.483304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.483343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.483847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.484343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.484380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.484782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.485280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.485318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.485821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.486260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.486299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.486696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.487207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.487245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 
00:26:06.671 [2024-04-24 21:41:29.487687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.488118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.488156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.488609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.489050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.489088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.489559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.490024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.490063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.490588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.491035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.491073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.491525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.491968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.492007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.492512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.492970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.493007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.493510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.493958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.493997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 
00:26:06.671 [2024-04-24 21:41:29.494504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.495023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.495061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.495592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.496073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.496111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.496500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.496946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.496983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.497374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.497828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.497868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.498306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.498739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.498777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.671 qpair failed and we were unable to recover it. 00:26:06.671 [2024-04-24 21:41:29.499157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.671 [2024-04-24 21:41:29.499540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.499579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.500049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.500484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.500522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 
00:26:06.672 [2024-04-24 21:41:29.500960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.501382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.501393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.501824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.502323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.502361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.502742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.503223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.503234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.503653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.503996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.504034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.504404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.504788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.504828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.505207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.505587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.505598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.505981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.506499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.506537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 
00:26:06.672 [2024-04-24 21:41:29.506935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.507393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.507431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.507702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.508099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.508137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.508649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.509134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.509173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.509615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.510016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.510054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.510583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.511023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.511061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.511567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.512062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.512100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.512560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.513026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.513064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 
00:26:06.672 [2024-04-24 21:41:29.513472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.513916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.513960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.514317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.514693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.514704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.515108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.515548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.515587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.516025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.516202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.516213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.516655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.517142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.517181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.517635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.518095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.518134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.518583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.519080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.519119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 
00:26:06.672 [2024-04-24 21:41:29.519532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.520054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.520093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.520481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.520936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.520974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.521479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.521842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.521881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.522346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.522758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.522803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.523242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.523695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.523734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.672 qpair failed and we were unable to recover it. 00:26:06.672 [2024-04-24 21:41:29.524201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-04-24 21:41:29.524586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.524624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 00:26:06.673 [2024-04-24 21:41:29.524997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.525495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.525534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 
00:26:06.673 [2024-04-24 21:41:29.525991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.526479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.526491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 00:26:06.673 [2024-04-24 21:41:29.526879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.527074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.527086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 00:26:06.673 [2024-04-24 21:41:29.527537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.527990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.528028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 00:26:06.673 [2024-04-24 21:41:29.528555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.528993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.529031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 00:26:06.673 [2024-04-24 21:41:29.529421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.529906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.529946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 00:26:06.673 [2024-04-24 21:41:29.530472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.530692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.530729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 00:26:06.673 [2024-04-24 21:41:29.531167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.531625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.531671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 
00:26:06.673 [2024-04-24 21:41:29.532077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.532564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.532603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 00:26:06.673 [2024-04-24 21:41:29.533049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.533515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.533553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 00:26:06.673 [2024-04-24 21:41:29.534007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.534389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.534426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 00:26:06.673 [2024-04-24 21:41:29.534901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.535361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.535399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 00:26:06.673 [2024-04-24 21:41:29.535845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.536388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.536426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 00:26:06.673 [2024-04-24 21:41:29.536870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.537365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.537404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 00:26:06.673 [2024-04-24 21:41:29.537867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.538337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-04-24 21:41:29.538375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.673 qpair failed and we were unable to recover it. 
00:26:06.942 [2024-04-24 21:41:29.658759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.942 [2024-04-24 21:41:29.659194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.659206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.659624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.659824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.659835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.660226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.660663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.660675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.661142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.661527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.661541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.661919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.662309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.662321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.662698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.663112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.663123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.663515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.663951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.663963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 
00:26:06.943 [2024-04-24 21:41:29.664420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.664831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.664843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.665161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.665312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.665324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.665786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.666256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.666268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.666672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.667009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.667020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.667403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.667790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.667802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.668300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.668692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.668704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.669150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.669558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.669572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 
00:26:06.943 [2024-04-24 21:41:29.669974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.670439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.670455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.670899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.671363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.671375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.671826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.672222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.672233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.672679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.673111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.673123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.673512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.673956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.673968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.674274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.674647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.674659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.675036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.675400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.675411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 
00:26:06.943 [2024-04-24 21:41:29.675812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.676186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.676197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.676540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.676918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.676929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.677364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.677735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.677749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.678214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.678676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.678688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.679153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.679602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.679614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.680034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.680401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.680413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 00:26:06.943 [2024-04-24 21:41:29.680812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.681196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.681208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.943 qpair failed and we were unable to recover it. 
00:26:06.943 [2024-04-24 21:41:29.681431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.943 [2024-04-24 21:41:29.681822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.681835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.682306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.682525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.682537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.683007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.683447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.683462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.683872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.684249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.684260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.684642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.685011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.685023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.685493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.685903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.685914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.686309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.686703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.686716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 
00:26:06.944 [2024-04-24 21:41:29.687115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.687557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.687569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.688033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.688453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.688465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.688852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.689289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.689301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.689745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.690140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.690152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.690519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.690976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.690987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.691379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.691842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.691854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.692232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.692620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.692632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 
00:26:06.944 [2024-04-24 21:41:29.692975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.693344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.693356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.693670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.694075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.694087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.694552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.694932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.694944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.695409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.695872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.695883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.696256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.696662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.696673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.697163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.697646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.697658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.698036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.698360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.698372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 
00:26:06.944 [2024-04-24 21:41:29.698791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.699261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.699273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.699586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.699960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.699972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.700348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.700722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.700734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.701143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.701527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.701539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.701929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.702369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.702380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.702706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.703088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.703100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.944 [2024-04-24 21:41:29.703308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.703751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.703762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 
00:26:06.944 [2024-04-24 21:41:29.704238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.704618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.944 [2024-04-24 21:41:29.704629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.944 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.705119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.705561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.705573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.705970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.706344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.706356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.706755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.707194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.707206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.707674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.708061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.708073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.708538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.708950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.708962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.709405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.709810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.709822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 
00:26:06.945 [2024-04-24 21:41:29.710292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.710736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.710748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.710896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.711313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.711324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.711782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.712121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.712133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.712508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.712970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.712982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.713304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.713767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.713779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.714246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.714637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.714649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.715095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.715444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.715460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 
00:26:06.945 [2024-04-24 21:41:29.715928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.716337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.716349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.716790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.717179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.717190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.717656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.718100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.718112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.718500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.718874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.718886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.719275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.719588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.719600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.720044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.720362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.720374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.720690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.721087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.721099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 
00:26:06.945 [2024-04-24 21:41:29.721468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.721621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.721632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.722028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.722462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.722474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.722881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.723099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.723110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.723511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.723975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.723987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.724448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.724895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.724907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.725362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.725702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.725714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.726180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.726637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.726649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 
00:26:06.945 [2024-04-24 21:41:29.727118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.727566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.727578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.945 qpair failed and we were unable to recover it. 00:26:06.945 [2024-04-24 21:41:29.728041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.728521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.945 [2024-04-24 21:41:29.728532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.728922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.729328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.729340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.729750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.730213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.730224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.730432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.730894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.730906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.731367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.731809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.731821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.732236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.732627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.732638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 
00:26:06.946 [2024-04-24 21:41:29.733007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.733445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.733467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.733846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.734231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.734243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.734727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.734872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.734883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.735350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.735758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.735771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.736168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.736557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.736569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.736981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.737419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.737431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.737857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.738272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.738283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 
00:26:06.946 [2024-04-24 21:41:29.738741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.739158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.739169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.739548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.739942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.739954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.740396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.740837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.740849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.741243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.741701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.741713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.742107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.742547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.742559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.743023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.743471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.743483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 00:26:06.946 [2024-04-24 21:41:29.743875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.744356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.946 [2024-04-24 21:41:29.744367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:06.946 qpair failed and we were unable to recover it. 
00:26:06.946 [2024-04-24 21:41:29.744776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.946 [2024-04-24 21:41:29.745239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.946 [2024-04-24 21:41:29.745251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420
00:26:06.946 qpair failed and we were unable to recover it.
[... the same sequence (typically two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats back-to-back for every reconnect attempt from [2024-04-24 21:41:29.745699] through [2024-04-24 21:41:29.882591]; only the timestamps differ, with the log-clock prefix advancing from 00:26:06.946 to 00:26:07.216 over the run ...]
00:26:07.216 [2024-04-24 21:41:29.883102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.216 [2024-04-24 21:41:29.883554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.216 [2024-04-24 21:41:29.883594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420
00:26:07.216 qpair failed and we were unable to recover it.
00:26:07.216 [2024-04-24 21:41:29.884120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.884644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.884684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.885254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.885774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.885813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.886289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.886759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.886798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.887250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.887747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.887786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.888244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.888758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.888797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.889209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.889672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.889711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.890121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.890561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.890600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 
00:26:07.216 [2024-04-24 21:41:29.891059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.891558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.891597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.892105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.892631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.892670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.893193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.893727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.893766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.894246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.894691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.894704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.895044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.895465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.895505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.895963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.896555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.896595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.896999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.897432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.897479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 
00:26:07.216 [2024-04-24 21:41:29.897931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.898406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.898445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.898933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.899338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.899350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.899807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.900139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.900178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.900689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.901099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.901110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.901577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.902051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.902088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.902618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.903023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.903062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.903624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.904171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.904209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 
00:26:07.216 [2024-04-24 21:41:29.904663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.905130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.905168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.905668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.906129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.906168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.906693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.907125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.907137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.907618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.908021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.908059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.216 [2024-04-24 21:41:29.908515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.908982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.216 [2024-04-24 21:41:29.909020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.216 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.909527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.909985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.910023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.910533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.910942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.910980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 
00:26:07.217 [2024-04-24 21:41:29.911466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.911990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.912028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.912484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.912966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.912978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.913391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.913803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.913841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.914331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.914793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.914832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.915401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.915815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.915854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.916325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.916832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.916871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.917415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.917883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.917923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 
00:26:07.217 [2024-04-24 21:41:29.918449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.918925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.918963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.919447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.919867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.919906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.920360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.920859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.920899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.921407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.921863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.921903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.922389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.922909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.922921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.923383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.923812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.923851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.924468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.924829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.924841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 
00:26:07.217 [2024-04-24 21:41:29.925265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.925715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.925755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.926205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.926553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.926565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.927030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.927548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.927587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.928063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.928596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.928635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.929096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.929621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.929666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.930200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.930607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.930647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.931048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.931476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.931488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 
00:26:07.217 [2024-04-24 21:41:29.931879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.932286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.932325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.932822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.933224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.933262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.933802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.934239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.934277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.934795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.935273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.935312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.935890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.936294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.936333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.217 qpair failed and we were unable to recover it. 00:26:07.217 [2024-04-24 21:41:29.936809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.217 [2024-04-24 21:41:29.937233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.937245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.937713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.938193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.938232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 
00:26:07.218 [2024-04-24 21:41:29.938823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.939335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.939349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.940983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.941415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.941430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.941937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.942491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.942531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.942976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.943499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.943538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.945226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.945750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.945794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.946262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.946783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.946823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.947330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.947808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.947848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 
00:26:07.218 [2024-04-24 21:41:29.948211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.948761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.948801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.949296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.949823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.949862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.950428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.950853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.950892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.951385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.951846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.951893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.952370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.952896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.952936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.953414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.953933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.953972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.954521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.954976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.955015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 
00:26:07.218 [2024-04-24 21:41:29.955558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.956015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.956053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.956474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.957000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.957039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.957636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.958084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.958123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.958630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.959129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.959167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.959663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.960062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.960101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.960590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.961088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.961127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.218 qpair failed and we were unable to recover it. 00:26:07.218 [2024-04-24 21:41:29.961683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.962137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.218 [2024-04-24 21:41:29.962181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 
00:26:07.219 [2024-04-24 21:41:29.962714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.963157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.963168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.963646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.964133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.964171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.964704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.965246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.965285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.965766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.966206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.966217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.966628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.967154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.967192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.967769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.968172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.968210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.968700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.969104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.969143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 
00:26:07.219 [2024-04-24 21:41:29.969694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.970072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.970111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.970635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.971089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.971128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.971629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.972165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.972204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.972742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.973207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.973246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.973704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.974213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.974251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.974650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.975114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.975152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.975612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.976121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.976160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 
00:26:07.219 [2024-04-24 21:41:29.976639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.977151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.977189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.977752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.978201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.978239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.978693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.979195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.979234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.979793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.980217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.980255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.980842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.981320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.981358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.981893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.982376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.982415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.982831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.983330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.983342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 
00:26:07.219 [2024-04-24 21:41:29.983817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.984406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.984444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.984950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.985399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.985437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.985911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.986421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.986472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.986987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.987530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.987571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.988120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.988650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.988690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.989249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.989706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.989745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 00:26:07.219 [2024-04-24 21:41:29.990298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.990803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.990842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.219 qpair failed and we were unable to recover it. 
00:26:07.219 [2024-04-24 21:41:29.991390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.219 [2024-04-24 21:41:29.991864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.991904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:29.992470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.992971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.993009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:29.993517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.994018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.994057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:29.994599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.995054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.995093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:29.995647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.996195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.996234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:29.996777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.997218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.997257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:29.997779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.998254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.998298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 
00:26:07.220 [2024-04-24 21:41:29.998772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.999291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:29.999330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:29.999804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.000275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.000287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.000802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.001188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.001200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.001676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.002122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.002134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.002551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.002951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.002963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.003390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.003889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.003951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.004574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.005131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.005161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 
00:26:07.220 [2024-04-24 21:41:30.005617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.005980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.005994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.006423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.006791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.006804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.007248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.007721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.007734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.008142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.008540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.008553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.008956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.009388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.009401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.009815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.010274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.010286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.010696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.011116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.011128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 
00:26:07.220 [2024-04-24 21:41:30.011534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.011986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.011998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.012425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.012849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.012862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.013186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.013583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.013596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.013994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.014396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.014408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.014879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.015277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.015289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.015790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.016210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.016222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.016692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.017090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.017102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 
00:26:07.220 [2024-04-24 21:41:30.017568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.017967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.220 [2024-04-24 21:41:30.017979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.220 qpair failed and we were unable to recover it. 00:26:07.220 [2024-04-24 21:41:30.018333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.018821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.018833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.019201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.019691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.019703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.020107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.020524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.020537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.020944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.021384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.021395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.021796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.022261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.022273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.022686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.023088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.023100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 
00:26:07.221 [2024-04-24 21:41:30.023570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.023964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.023976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.024469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.024857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.024869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.025296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.025785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.025797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.026197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.026605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.026617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.027065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.027507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.027519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.028010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.028469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.028481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.028879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.029199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.029211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 
00:26:07.221 [2024-04-24 21:41:30.029681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.030053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.030065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.030550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.030945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.030957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.031410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.031814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.031826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.032269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.032745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.032757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.033107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.033581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.033593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.034062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.034446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.034463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.034866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.035201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.035215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 
00:26:07.221 [2024-04-24 21:41:30.035565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.035996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.036009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.036415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.036835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.036849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.037283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.037746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.037758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.038232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.038689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.038701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.039037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.039510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.039522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.039937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.040401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.040413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.040766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.041184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.041196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 
00:26:07.221 [2024-04-24 21:41:30.041692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.042088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.042100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.221 [2024-04-24 21:41:30.042576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.042958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.221 [2024-04-24 21:41:30.042970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.221 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.043482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.043928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.043940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.044363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.044854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.044866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.045347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.045829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.045842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.046267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.046724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.046736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.047150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.047611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.047623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 
00:26:07.222 [2024-04-24 21:41:30.047928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.048382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.048394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.048813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.049258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.049270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.049654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.050041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.050053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.050470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.050846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.050858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.051330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.051770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.051783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.052196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.052645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.052658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.053103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.053567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.053578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 
00:26:07.222 [2024-04-24 21:41:30.054050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.054529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.054540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.055027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.055433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.055444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.055851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.056258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.056270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.056607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.056956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.056967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.057449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.057898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.057910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.058322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.058716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.058728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.059138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.059603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.059615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 
00:26:07.222 [2024-04-24 21:41:30.060020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.060511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.060522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.060868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.061319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.061331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.061717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.062176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.062188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.062635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.063024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.063045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.063532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.063927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.063938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.222 qpair failed and we were unable to recover it. 00:26:07.222 [2024-04-24 21:41:30.064374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.222 [2024-04-24 21:41:30.064764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.064790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.065188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.065665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.065677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 
00:26:07.223 [2024-04-24 21:41:30.066190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.066521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.066532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.066925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.067340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.067351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.067682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.068011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.068023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.068425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.068807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.068819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.069237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.069543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.069554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.069893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.070345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.070356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.070801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.071245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.071256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 
00:26:07.223 [2024-04-24 21:41:30.071656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.072058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.072070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.072415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.072790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.072803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.073224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.073596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.073608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.074093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.074536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.074548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.074931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.075250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.075261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.075649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.076054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.076073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.076388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.076786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.076798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 
00:26:07.223 [2024-04-24 21:41:30.077262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.077651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.077663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.077993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.078435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.078446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.078833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.079206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.079217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.079659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.080122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.080134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.080447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.080767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.080782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.081261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.081665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.081677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.082123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.082586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.082598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 
00:26:07.223 [2024-04-24 21:41:30.083017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.083382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.083394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.083846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.084151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.084162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.084540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.084951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.084962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.085427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.085920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.085932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.086285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.086688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.086700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.087144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.087604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.223 [2024-04-24 21:41:30.087616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.223 qpair failed and we were unable to recover it. 00:26:07.223 [2024-04-24 21:41:30.088059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.088366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.088377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.224 qpair failed and we were unable to recover it. 
00:26:07.224 [2024-04-24 21:41:30.088797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.089239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.089252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.224 qpair failed and we were unable to recover it. 00:26:07.224 [2024-04-24 21:41:30.089642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.090052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.090064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.224 qpair failed and we were unable to recover it. 00:26:07.224 [2024-04-24 21:41:30.090441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.090758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.090772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.224 qpair failed and we were unable to recover it. 00:26:07.224 [2024-04-24 21:41:30.091246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.091709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.091723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.224 qpair failed and we were unable to recover it. 00:26:07.224 [2024-04-24 21:41:30.092171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.092497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.092509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.224 qpair failed and we were unable to recover it. 00:26:07.224 [2024-04-24 21:41:30.092899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.093329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.093344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.224 qpair failed and we were unable to recover it. 00:26:07.224 [2024-04-24 21:41:30.093789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.094232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.224 [2024-04-24 21:41:30.094243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.224 qpair failed and we were unable to recover it. 
00:26:07.224 [2024-04-24 21:41:30.094711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-04-24 21:41:30.095185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-04-24 21:41:30.095199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-04-24 21:41:30.095665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-04-24 21:41:30.096003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-04-24 21:41:30.096017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-04-24 21:41:30.096466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-04-24 21:41:30.096922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-04-24 21:41:30.096934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-04-24 21:41:30.097429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-04-24 21:41:30.097899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-04-24 21:41:30.097913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-04-24 21:41:30.098307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-04-24 21:41:30.098689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-04-24 21:41:30.098701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.487 [2024-04-24 21:41:30.099146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-04-24 21:41:30.099604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-04-24 21:41:30.099616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-04-24 21:41:30.100122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-04-24 21:41:30.100587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-04-24 21:41:30.100599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 
00:26:07.490 [2024-04-24 21:41:30.240803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.241366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.241405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-04-24 21:41:30.241955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.242424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.242473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-04-24 21:41:30.243045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.243569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.243609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-04-24 21:41:30.244150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.244674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.244713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-04-24 21:41:30.245166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.245663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.245701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-04-24 21:41:30.246257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.246786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.246825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-04-24 21:41:30.247378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.247930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.247969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 
00:26:07.490 [2024-04-24 21:41:30.248524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.249040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.249084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-04-24 21:41:30.249659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.250177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.250215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-04-24 21:41:30.250778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.251251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.251289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-04-24 21:41:30.251823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.252321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-04-24 21:41:30.252332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-04-24 21:41:30.252778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.253256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.253294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.253870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.254393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.254432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.254992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.255423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.255474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 
00:26:07.491 [2024-04-24 21:41:30.256033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.256487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.256527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.256987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.257506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.257545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.258124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.258638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.258677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.259243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.259761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.259775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.260229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.260730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.260769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.261327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.261794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.261843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.262315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.262862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.262909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 
00:26:07.491 [2024-04-24 21:41:30.263339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.263857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.263895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.264473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.264980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.265018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.265580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.266086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.266097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.266567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.267076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.267115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.267651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.268204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.268243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.268788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.269172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.269184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.269665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.270228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.270272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 
00:26:07.491 [2024-04-24 21:41:30.270847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.271390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.271428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.271919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.272444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.272494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.273057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.273497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.273536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.274049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.274598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.274638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.275119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.275620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.275660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.276203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.276777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.276815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.277328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.277875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.277914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 
00:26:07.491 [2024-04-24 21:41:30.278484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.278995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.279033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.279593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.280113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.280152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.280737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.281280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.281318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.281835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.282358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.282369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.282782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.283244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.283283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.283813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.284380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.284418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.284999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.285504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.285543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 
00:26:07.491 [2024-04-24 21:41:30.286118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.286592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.286655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.287132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.287660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.287700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.288158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.288640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.288679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.289209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.289730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.289771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.290331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.290861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.290901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.291469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.291939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.291978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.292542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.293068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.293106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 
00:26:07.491 [2024-04-24 21:41:30.293689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.294197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.294209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.294711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.295212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.295250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-04-24 21:41:30.295788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-04-24 21:41:30.296315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.296353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.296960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.297519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.297559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.298134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.298652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.298691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.299226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.299757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.299797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.300366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.300877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.300916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 
00:26:07.492 [2024-04-24 21:41:30.301480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.301994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.302032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.302514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.303014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.303052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.303614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.304126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.304138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.304566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.304961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.304973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.305434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.305973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.306011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.306553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.307092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.307131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.307613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.308125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.308138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 
00:26:07.492 [2024-04-24 21:41:30.308572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.309027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.309066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.309545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.310052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.310090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.310655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.311150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.311162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.311652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.312213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.312251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.312761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.313293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.313331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.313878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.314309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.314347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.314848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.315329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.315366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 
00:26:07.492 [2024-04-24 21:41:30.315890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.316423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.316469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.317032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.317556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.317596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.318054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.318609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.318648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.319218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.319778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.319817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.320380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.320917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.320956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.321547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.322047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.322086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.322628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.323196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.323235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 
00:26:07.492 [2024-04-24 21:41:30.323709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.324249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.324287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.324778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.325295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.325333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.325791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.326330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.326368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.326775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.327252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.327290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.327876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.328307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.328346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.328911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.329398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.329437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.329985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.330556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.330595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 
00:26:07.492 [2024-04-24 21:41:30.331065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.331522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.331562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.332083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.332620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.332660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.333158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.333687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.333726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.334267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.334790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.334829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.335421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.335991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.336003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.336521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.337120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.337159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.337706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.338264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.338303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 
00:26:07.492 [2024-04-24 21:41:30.338847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.339358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.339395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-04-24 21:41:30.339942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-04-24 21:41:30.340535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.340574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-04-24 21:41:30.341038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.341524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.341563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-04-24 21:41:30.342129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.342682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.342721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-04-24 21:41:30.343265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.343830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.343870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-04-24 21:41:30.344340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.344776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.344815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-04-24 21:41:30.345377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.345886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.345939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 
00:26:07.493 [2024-04-24 21:41:30.346441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.346912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.346951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-04-24 21:41:30.347492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.348023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.348061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-04-24 21:41:30.348644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.349166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.349214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-04-24 21:41:30.349691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.350103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.350141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-04-24 21:41:30.350694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.351250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.351289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-04-24 21:41:30.351867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.352376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.352416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-04-24 21:41:30.352988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.353512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-04-24 21:41:30.353552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 
00:26:07.493 [2024-04-24 21:41:30.354141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.493 [2024-04-24 21:41:30.354641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.493 [2024-04-24 21:41:30.354680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420
00:26:07.493 qpair failed and we were unable to recover it.
[... the above four-entry sequence (two "connect() failed, errno = 111" errors from posix.c:1037:posix_sock_create, one sock connection error from nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock for tqpair=0x7f1b84000b90 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats without variation from 21:41:30.354141 through 21:41:30.505289 (console time 00:26:07.493 to 00:26:07.764) ...]
00:26:07.764 [2024-04-24 21:41:30.505809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.506255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.506266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.506667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.507169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.507207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.507546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.507924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.507961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.508515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.509053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.509091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.509548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.509942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.509981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.510491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.510992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.511030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.511547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.511994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.512033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 
00:26:07.765 [2024-04-24 21:41:30.512469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.512900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.512938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.513281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.513803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.513842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.514414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.514863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.514902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.515375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.515854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.515893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.516347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.516843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.516882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.517327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.517847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.517886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.518401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.518832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.518844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 
00:26:07.765 [2024-04-24 21:41:30.519269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.519791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.519830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.520292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.520740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.520780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.521231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.521689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.521701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.522179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.522624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.522662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.523104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.523649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.523661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.524131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.524680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.524719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.525279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.525784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.525796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 
00:26:07.765 [2024-04-24 21:41:30.526246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.526635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.526674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.527204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.527767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.527806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.528255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.528713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.528753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.529264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.529723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.529735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.530182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.530581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.530621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.531174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.531747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.531785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 00:26:07.765 [2024-04-24 21:41:30.532301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.532825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.532864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.765 qpair failed and we were unable to recover it. 
00:26:07.765 [2024-04-24 21:41:30.533421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.533952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.765 [2024-04-24 21:41:30.533991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.534529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.534979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.535018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.535479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.536002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.536041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.536593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.537044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.537082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.537560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.538070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.538108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.538666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.539175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.539214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.539775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.540289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.540327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 
00:26:07.766 [2024-04-24 21:41:30.540892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.541391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.541430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.541933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.542398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.542410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.542916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.543465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.543505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.543969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.544550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.544590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.545069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.545596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.545636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.546214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.546748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.546787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.547302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.547832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.547871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 
00:26:07.766 [2024-04-24 21:41:30.548426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.548919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.548958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.549494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.550057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.550095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.550621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.551021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.551059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.551610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.552070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.552108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.552655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.553113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.553152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.553662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.554169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.554207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.554685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.555200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.555211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 
00:26:07.766 [2024-04-24 21:41:30.555725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.556297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.556336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.556875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.557388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.557402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.557891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.558363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.558402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.558962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.559500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.559539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.560094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.560610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.560649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.561134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.561572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.561612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.562055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.562600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.562639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 
00:26:07.766 [2024-04-24 21:41:30.563201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.563713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.563725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.766 qpair failed and we were unable to recover it. 00:26:07.766 [2024-04-24 21:41:30.564198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.766 [2024-04-24 21:41:30.564670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.564709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.565288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.565823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.565835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.566273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.566752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.566791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.567320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.567722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.567767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.568300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.568877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.568916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.569411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.569930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.569968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 
00:26:07.767 [2024-04-24 21:41:30.570528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.571030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.571068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.571616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.572070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.572081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.572536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.573022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.573060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.573620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.574097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.574135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.574691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.575197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.575236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.575838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.576367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.576406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.576937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.577520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.577560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 
00:26:07.767 [2024-04-24 21:41:30.578071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.578593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.578608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.579114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.579635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.579674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.580214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.580770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.580782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.581290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.581841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.581880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.582280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.582823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.582863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.583259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.583830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.583870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.584351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.584794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.584833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 
00:26:07.767 [2024-04-24 21:41:30.585395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.585950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.585989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.586546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.586988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.587026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.587592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.588115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.588154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.588663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.589178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.589222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.589825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.590340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.590378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.590936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.591497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.591537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.592042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.592502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.592540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 
00:26:07.767 [2024-04-24 21:41:30.593082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.593646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.593686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.594198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.594649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.767 [2024-04-24 21:41:30.594662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.767 qpair failed and we were unable to recover it. 00:26:07.767 [2024-04-24 21:41:30.595148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.595716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.595756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.768 qpair failed and we were unable to recover it. 00:26:07.768 [2024-04-24 21:41:30.596298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.596876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.596889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.768 qpair failed and we were unable to recover it. 00:26:07.768 [2024-04-24 21:41:30.597342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.597755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.597795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.768 qpair failed and we were unable to recover it. 00:26:07.768 [2024-04-24 21:41:30.598321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.598887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.598926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.768 qpair failed and we were unable to recover it. 00:26:07.768 [2024-04-24 21:41:30.599503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.600023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.600061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.768 qpair failed and we were unable to recover it. 
00:26:07.768 [2024-04-24 21:41:30.600615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.601069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.601107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.768 qpair failed and we were unable to recover it. 00:26:07.768 [2024-04-24 21:41:30.601631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.602119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.602157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.768 qpair failed and we were unable to recover it. 00:26:07.768 [2024-04-24 21:41:30.602749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.603237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.603276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.768 qpair failed and we were unable to recover it. 00:26:07.768 [2024-04-24 21:41:30.603763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.604167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.604179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.768 qpair failed and we were unable to recover it. 00:26:07.768 [2024-04-24 21:41:30.604646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.605128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.605167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.768 qpair failed and we were unable to recover it. 00:26:07.768 [2024-04-24 21:41:30.605716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.606251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.606289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.768 qpair failed and we were unable to recover it. 00:26:07.768 [2024-04-24 21:41:30.606877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.607378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.768 [2024-04-24 21:41:30.607417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:07.768 qpair failed and we were unable to recover it. 
00:26:07.768 [2024-04-24 21:41:30.607948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.768 [2024-04-24 21:41:30.608408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.768 [2024-04-24 21:41:30.608446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420
00:26:07.768 qpair failed and we were unable to recover it.
[... the same three-record cycle -- connect() failed, errno = 111, sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." -- repeats for several dozen further attempts between 21:41:30.608985 and 21:41:30.660612 ...]
00:26:08.058 [2024-04-24 21:41:30.661025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.058 [2024-04-24 21:41:30.661502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.058 [2024-04-24 21:41:30.661515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420
00:26:08.058 qpair failed and we were unable to recover it.
00:26:08.058 [2024-04-24 21:41:30.661645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b11260 is same with the state(5) to be set
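For context on what is failing here: errno 111 on Linux is ECONNREFUSED, meaning the host at 10.0.0.2 answered each TCP SYN with a RST because nothing was accepting connections on port 4420 (the IANA-assigned NVMe/TCP port). The following is a minimal standalone sketch, not SPDK code -- the address and port simply mirror the log above -- showing how a plain connect() surfaces the same errno when no listener is present:

/* Minimal sketch (not SPDK code): reproduce the "connect() failed,
 * errno = 111" seen above. With a reachable host and no listener on
 * the port, connect() fails with ECONNREFUSED (111 on Linux). The
 * address and port are taken from the log, not from SPDK sources. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                /* NVMe/TCP target port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* No listener: the peer refuses, errno is ECONNREFUSED. */
        printf("connect() failed, errno = %d (%s)\n",
               errno, strerror(errno));
    }
    close(fd);
    return 0;
}

On a box where 10.0.0.2 is reachable but idle, this prints "connect() failed, errno = 111 (Connection refused)", which is exactly the condition the posix sock layer reports on every attempt below.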
00:26:08.058 [2024-04-24 21:41:30.662225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.058 [2024-04-24 21:41:30.662603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.058 [2024-04-24 21:41:30.662625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.058 qpair failed and we were unable to recover it.
[... the same cycle, now against tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420, repeats for roughly a hundred further attempts between 21:41:30.663054 and 21:41:30.768407 ...]
00:26:08.061 [2024-04-24 21:41:30.768996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.061 [2024-04-24 21:41:30.769571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.061 [2024-04-24 21:41:30.769612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.061 qpair failed and we were unable to recover it.
00:26:08.061 [2024-04-24 21:41:30.770115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.061 [2024-04-24 21:41:30.770619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.061 [2024-04-24 21:41:30.770659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.061 qpair failed and we were unable to recover it. 00:26:08.061 [2024-04-24 21:41:30.771228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.061 [2024-04-24 21:41:30.771751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.061 [2024-04-24 21:41:30.771791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.061 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.772302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.772827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.772867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.773290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.773722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.773762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.774288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.774720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.774737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.775235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.775722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.775761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.776309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.776871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.776911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 
00:26:08.062 [2024-04-24 21:41:30.777493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.778070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.778109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.778696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.779278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.779317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.779900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.780439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.780487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.781041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.781574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.781614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.782186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.782711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.782752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.783305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.783826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.783865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.784444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.784967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.785006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 
00:26:08.062 [2024-04-24 21:41:30.785487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.786020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.786059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.786629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.787115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.787154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.787612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.788145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.788182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.788767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.789229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.789268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.789718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.790174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.790213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.790759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.791351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.791396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.791963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.792477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.792517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 
00:26:08.062 [2024-04-24 21:41:30.793007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.793563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.793602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.794190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.794766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.794805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.795370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.795882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.795922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.796414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.796938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.796977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.797531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.798046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.798085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.798630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.799165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.799203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.799750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.800307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.800346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 
00:26:08.062 [2024-04-24 21:41:30.800995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.801560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.801601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.062 [2024-04-24 21:41:30.802079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.802534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.062 [2024-04-24 21:41:30.802580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.062 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.803095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.803573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.803590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.804093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.804658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.804697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.805291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.805690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.805729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.806195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.806724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.806780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.807349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.807871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.807911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 
00:26:08.063 [2024-04-24 21:41:30.808402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.808926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.808967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.809516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.809980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.810018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.810539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.811081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.811119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.811669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.812203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.812242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.812771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.813288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.813333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.813902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.814469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.814508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.815018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.815507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.815547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 
00:26:08.063 [2024-04-24 21:41:30.816041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.816576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.816615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.817112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.817645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.817686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.818225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.818759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.818799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.819278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.819734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.819774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.820326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.820885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.820924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.821505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.821984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.822022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.822607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.823116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.823153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 
00:26:08.063 [2024-04-24 21:41:30.823738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.824336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.824374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.824864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.825280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.825319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.825791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.826327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.826367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.826895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.827367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.827406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.828002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.828469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.828508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.829041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.829618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.829657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.830233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.830706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.830747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 
00:26:08.063 [2024-04-24 21:41:30.831222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.831710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.831750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.832245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.832757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.063 [2024-04-24 21:41:30.832797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.063 qpair failed and we were unable to recover it. 00:26:08.063 [2024-04-24 21:41:30.833379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.833914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.833954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.834505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.835026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.835065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.835539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.835945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.835991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.836430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.837004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.837042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.837522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.837935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.837974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 
00:26:08.064 [2024-04-24 21:41:30.838530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.839017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.839056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.839602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.840149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.840188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.840735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.841304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.841355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.841774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.842209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.842247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.842789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.843273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.843313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.843837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.844398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.844439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.844987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.845522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.845563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 
00:26:08.064 [2024-04-24 21:41:30.846145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.846603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.846621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.847114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.847516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.847555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.848093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.848639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.848678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.849221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.849799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.849839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.850375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.850912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.850952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.851527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.852087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.852127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.852692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.853250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.853289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 
00:26:08.064 [2024-04-24 21:41:30.853774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.854310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.854349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.854922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.855471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.855511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.855996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.856471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.856511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.857094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.857550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.857590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.857967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.858507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.858546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.858997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.859475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.859515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.860074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.860547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.860587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 
00:26:08.064 [2024-04-24 21:41:30.861070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.861551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.861592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.862035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.862587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.862604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.064 [2024-04-24 21:41:30.863033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.863523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.064 [2024-04-24 21:41:30.863541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.064 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.864040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.864552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.864569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.865086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.865545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.865561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.866103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.866535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.866551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.867054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.867516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.867534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 
00:26:08.065 [2024-04-24 21:41:30.868052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.868563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.868580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.869093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.869555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.869572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.870069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.870556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.870573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.870988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.871463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.871480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.871856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.872345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.872362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.872770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.873215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.873232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.873723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.874090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.874107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 
00:26:08.065 [2024-04-24 21:41:30.874573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.874982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.874998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.875447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.875941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.875958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.876465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.876957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.876974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.877502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.877938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.877955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.878404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.878825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.878842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.879268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.879750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.879767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 00:26:08.065 [2024-04-24 21:41:30.880236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.880764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.065 [2024-04-24 21:41:30.880781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.065 qpair failed and we were unable to recover it. 
[... 140 further retry cycles elided (2024-04-24 21:41:30.881306 through 21:41:31.020289, console timestamps 00:26:08.065-00:26:08.334): each cycle repeats two posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 records, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.", differing only in timestamps ...]
00:26:08.334 [2024-04-24 21:41:31.020766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.021284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.021322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.334 qpair failed and we were unable to recover it. 00:26:08.334 [2024-04-24 21:41:31.021756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.022301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.022339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.334 qpair failed and we were unable to recover it. 00:26:08.334 [2024-04-24 21:41:31.022877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.023161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.023199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.334 qpair failed and we were unable to recover it. 00:26:08.334 [2024-04-24 21:41:31.023587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.024036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.024074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.334 qpair failed and we were unable to recover it. 00:26:08.334 [2024-04-24 21:41:31.024361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.024871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.024911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.334 qpair failed and we were unable to recover it. 00:26:08.334 [2024-04-24 21:41:31.025359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.025895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.025911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.334 qpair failed and we were unable to recover it. 00:26:08.334 [2024-04-24 21:41:31.026369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.026835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.026851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.334 qpair failed and we were unable to recover it. 
00:26:08.334 [2024-04-24 21:41:31.027251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.334 [2024-04-24 21:41:31.027633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.027673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.028199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.028710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.028726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.029157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.029593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.029638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.030043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.030528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.030568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.031051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.031482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.031522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.032002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.032464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.032504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.032786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.033229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.033267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 
00:26:08.335 [2024-04-24 21:41:31.033749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.034243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.034281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.034829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.035350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.035388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.035891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.036387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.036429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.036848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.037366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.037404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.037922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.038447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.038498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.038968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.039433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.039483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.039931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.040409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.040447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 
00:26:08.335 [2024-04-24 21:41:31.040927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.041380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.041427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.041849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.042278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.042317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.042849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.043139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.043155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.043565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.044020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.044059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.044519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.045063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.045102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.045573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.046093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.046131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.046652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.046858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.046875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 
00:26:08.335 [2024-04-24 21:41:31.047279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.047710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.047750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.048206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.048641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.048680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.049179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.049567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.049607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.050118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.050639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.050679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.051181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.051726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.051765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.052259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.052779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.052819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.053273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.053747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.053788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 
00:26:08.335 [2024-04-24 21:41:31.054228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.054669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.335 [2024-04-24 21:41:31.054709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.335 qpair failed and we were unable to recover it. 00:26:08.335 [2024-04-24 21:41:31.055174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.055558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.055598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.056076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.056570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.056586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.057087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.057616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.057656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.058122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.058645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.058685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.059180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.059628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.059669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.060123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.060645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.060685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 
00:26:08.336 [2024-04-24 21:41:31.061100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.061533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.061574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.062031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.062502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.062542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.063002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.063381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.063419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.063867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.064387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.064427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.064942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.065366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.065405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.065932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.066431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.066481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.066933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.067464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.067503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 
00:26:08.336 [2024-04-24 21:41:31.068029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.068397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.068413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.068839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.069231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.069270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.069798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.070346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.070386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.070874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.071307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.071346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.071818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.072250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.072289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.072762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.073202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.073241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.073772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.074225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.074264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 
00:26:08.336 [2024-04-24 21:41:31.074706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.075214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.075254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.075653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.076015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.076054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.076465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.076989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.077028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.336 qpair failed and we were unable to recover it. 00:26:08.336 [2024-04-24 21:41:31.077499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.077955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.336 [2024-04-24 21:41:31.077993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.078466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.078927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.078971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.079447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.079782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.079821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.080301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.080737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.080782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 
00:26:08.337 [2024-04-24 21:41:31.081186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.081642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.081682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.082190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.082627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.082667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.083111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.083517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.083556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.084086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.084548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.084588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.085117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.085553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.085593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.086108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.086647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.086687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.087141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.087587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.087627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 
00:26:08.337 [2024-04-24 21:41:31.088134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.088581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.088621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.089077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.089518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.089558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.090062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.090512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.090552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.091033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.091478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.091519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.092028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.092475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.092514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.092709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.093144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.093182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.093641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.094101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.094117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 
00:26:08.337 [2024-04-24 21:41:31.094513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.095018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.095056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.095600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.096104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.096143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.096591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.097053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.097092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.097542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.097989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.098028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.098493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.098956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.098994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.099386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.099859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.099898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.100423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.100830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.100869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 
00:26:08.337 [2024-04-24 21:41:31.101253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.101513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.101553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.102091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.102614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.102654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.103181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.103622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.103668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.104098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.104510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.104550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.337 qpair failed and we were unable to recover it. 00:26:08.337 [2024-04-24 21:41:31.105006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.337 [2024-04-24 21:41:31.105403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.105441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 00:26:08.338 [2024-04-24 21:41:31.105695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.106165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.106203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 00:26:08.338 [2024-04-24 21:41:31.106617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.107141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.107179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 
00:26:08.338 [2024-04-24 21:41:31.107633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.108025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.108069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 00:26:08.338 [2024-04-24 21:41:31.108516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.108998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.109036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 00:26:08.338 [2024-04-24 21:41:31.109592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.110036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.110075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 00:26:08.338 [2024-04-24 21:41:31.110598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.111097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.111136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 00:26:08.338 [2024-04-24 21:41:31.111657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.112154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.112193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 00:26:08.338 [2024-04-24 21:41:31.112638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.113070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.113114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 00:26:08.338 [2024-04-24 21:41:31.113501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.113952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.113990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 
00:26:08.338 [2024-04-24 21:41:31.114466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.114980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.115019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 00:26:08.338 [2024-04-24 21:41:31.115553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.115846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.115885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 00:26:08.338 [2024-04-24 21:41:31.116337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.116834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.116879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 00:26:08.338 [2024-04-24 21:41:31.117332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.117788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.117828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 00:26:08.338 [2024-04-24 21:41:31.118380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.118783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.118822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 00:26:08.338 [2024-04-24 21:41:31.119258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.119685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.119724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 00:26:08.338 [2024-04-24 21:41:31.120118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.120517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.338 [2024-04-24 21:41:31.120534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.338 qpair failed and we were unable to recover it. 
00:26:08.338 [2024-04-24 21:41:31.120933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.121398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.121436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.338 qpair failed and we were unable to recover it.
00:26:08.338 [2024-04-24 21:41:31.121909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.122348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.122370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.338 qpair failed and we were unable to recover it.
00:26:08.338 [2024-04-24 21:41:31.122760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.123084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.123100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.338 qpair failed and we were unable to recover it.
00:26:08.338 [2024-04-24 21:41:31.123439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.123843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.123882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.338 qpair failed and we were unable to recover it.
00:26:08.338 [2024-04-24 21:41:31.124378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.124827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.124880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.338 qpair failed and we were unable to recover it.
00:26:08.338 [2024-04-24 21:41:31.125339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.125771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.125811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.338 qpair failed and we were unable to recover it.
00:26:08.338 [2024-04-24 21:41:31.126252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.126653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.126703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.338 qpair failed and we were unable to recover it.
00:26:08.338 [2024-04-24 21:41:31.127117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.127639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.127679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.338 qpair failed and we were unable to recover it.
00:26:08.338 [2024-04-24 21:41:31.128086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.128530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.128569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.338 qpair failed and we were unable to recover it.
00:26:08.338 [2024-04-24 21:41:31.129021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.129473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.129513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.338 qpair failed and we were unable to recover it.
00:26:08.338 [2024-04-24 21:41:31.129966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.130472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.130512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.338 qpair failed and we were unable to recover it.
00:26:08.338 [2024-04-24 21:41:31.130910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.131287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.338 [2024-04-24 21:41:31.131331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.338 qpair failed and we were unable to recover it.
00:26:08.338 [2024-04-24 21:41:31.131762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.132260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.132298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.132690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.133050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.133089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.133421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.133830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.133869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.134310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.134692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.134731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.135259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.135725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.135765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.136207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.136579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.136619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.137148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.137601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.137618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.138022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.138414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.138462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.138929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.139323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.139360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.139763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.140200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.140239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.140609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.141069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.141115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.141504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.141842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.141880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.142331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.142779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.142819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.143224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.143671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.143712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.144238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.144664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.144704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.145152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.145669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.145708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.146158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.146547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.146563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.146995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.147516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.147555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.148019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.148461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.148501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.148887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.149328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.149343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.149757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.150233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.150249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.150659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.151117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.151155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.151684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.152107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.152123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.152528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.152983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.153021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.153414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.153920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.153937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.154351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.154754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.154794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.155260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.155689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.155728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.156181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.156397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.156435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.339 [2024-04-24 21:41:31.156925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.157321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.339 [2024-04-24 21:41:31.157359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.339 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.158900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.159332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.159351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.159688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.160098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.160137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.160608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.161053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.161092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.161532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.163184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.163211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.163652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.164049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.164089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.164551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.165052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.165091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.165470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.165920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.165960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.166509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.166944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.166983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.167424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.167775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.167791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.168130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.168572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.168612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.169072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.169528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.169567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.170015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.170203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.170241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.171586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.171955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.171998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.172490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.172930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.172969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.173463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.173916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.173954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.174471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.174915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.174953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.175404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.175910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.175951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.176344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.176757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.176797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.177230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.177616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.177632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.178086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.178491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.178507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.178838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.179228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.179244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.179641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.180110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.180126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.180527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.180922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.180938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.181416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.181817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.181834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.182288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.182629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.182646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.183123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.183486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.183503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.340 qpair failed and we were unable to recover it.
00:26:08.340 [2024-04-24 21:41:31.184000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.184330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.340 [2024-04-24 21:41:31.184346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.184733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.185204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.185220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.185615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.185974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.185990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.186471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.186821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.186836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.187308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.187708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.187725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.188055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.188379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.188395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.188799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.189271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.189287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.189671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.190056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.190072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.190527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.190934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.190949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.191292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.191753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.191770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.192243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.192583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.192600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.193079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.193477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.193493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.193965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.194364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.194379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.194693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.195141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.195157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.195633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.196081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.196097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.196429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.196817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.196834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.197310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.197620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.197637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.197970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.198380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.198396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.198614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.199027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.199043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.199502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.199972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.199988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.200374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.200713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.200730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.201133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.201586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.201602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.201985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.202388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.202404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.202880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.203273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.203289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.203706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.204100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.204116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.204436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.204915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.204931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.341 qpair failed and we were unable to recover it.
00:26:08.341 [2024-04-24 21:41:31.205333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.205777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.341 [2024-04-24 21:41:31.205793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.342 qpair failed and we were unable to recover it.
00:26:08.342 [2024-04-24 21:41:31.206185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.206602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.206618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.342 qpair failed and we were unable to recover it.
00:26:08.342 [2024-04-24 21:41:31.207000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.207420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.207436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.342 qpair failed and we were unable to recover it.
00:26:08.342 [2024-04-24 21:41:31.207854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.208327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.208343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.342 qpair failed and we were unable to recover it.
00:26:08.342 [2024-04-24 21:41:31.208737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.209184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.209209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.342 qpair failed and we were unable to recover it.
00:26:08.342 [2024-04-24 21:41:31.209602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.210052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.210068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.342 qpair failed and we were unable to recover it.
00:26:08.342 [2024-04-24 21:41:31.210479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.210950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.210969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.342 qpair failed and we were unable to recover it.
00:26:08.342 [2024-04-24 21:41:31.211402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.211864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.211881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.342 qpair failed and we were unable to recover it.
00:26:08.342 [2024-04-24 21:41:31.212343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.212843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.342 [2024-04-24 21:41:31.212897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.342 qpair failed and we were unable to recover it.
00:26:08.342 [2024-04-24 21:41:31.213390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.213880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.213898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.606 qpair failed and we were unable to recover it.
00:26:08.606 [2024-04-24 21:41:31.214303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.214688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.214705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.606 qpair failed and we were unable to recover it.
00:26:08.606 [2024-04-24 21:41:31.215165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.215673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.215713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.606 qpair failed and we were unable to recover it.
00:26:08.606 [2024-04-24 21:41:31.216152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.216552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.216569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.606 qpair failed and we were unable to recover it.
00:26:08.606 [2024-04-24 21:41:31.216974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.217474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.217514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.606 qpair failed and we were unable to recover it.
00:26:08.606 [2024-04-24 21:41:31.218044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.218476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.218516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.606 qpair failed and we were unable to recover it.
00:26:08.606 [2024-04-24 21:41:31.218965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.219485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.219524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.606 qpair failed and we were unable to recover it.
00:26:08.606 [2024-04-24 21:41:31.219930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.220332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.606 [2024-04-24 21:41:31.220371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.606 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.220823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.221257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.221296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.221701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.222152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.222191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.222692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.223032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.223063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.223539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.224057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.224095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.224632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.225053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.225092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.225564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.226010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.226048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.226436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.226949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.226988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.227460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.227927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.227974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.228306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.228698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.228737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.229250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.229631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.229671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.230145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.230605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.230649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.231105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.231491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.231549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.232058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.232499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.232539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.233096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.233611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.233650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.234080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.234510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.234550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.234936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.235461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.235500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.235959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.236463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.236503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.237027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.237469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.237508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.237963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.238336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.238352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.238698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.239004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.239050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.239578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.240123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.240161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.240717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.241110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.241149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.241600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.241830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.241869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.242403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.242891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.242930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.243438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.243975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.244015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.244473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.244920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.244959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.245405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.245850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.245890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.246374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.246760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.246799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.607 qpair failed and we were unable to recover it.
00:26:08.607 [2024-04-24 21:41:31.247065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.607 [2024-04-24 21:41:31.247442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.247491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.247937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.248364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.248403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.248851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.249297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.249336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.249778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.250181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.250220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.250686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.251115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.251160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.251627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.251965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.252003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.252375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.252757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.252796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.253183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.253626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.253665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.254126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.254568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.254608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.255116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.255554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.255593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.256052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.256477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.256516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.256959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.257466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.257506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.257899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.258262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.258301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.258807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.259238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.259277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.259779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.260299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.260343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.260729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.261129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.608 [2024-04-24 21:41:31.261168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.608 qpair failed and we were unable to recover it.
00:26:08.608 [2024-04-24 21:41:31.261560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.262008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.262046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.608 qpair failed and we were unable to recover it. 00:26:08.608 [2024-04-24 21:41:31.262575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.263014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.263053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.608 qpair failed and we were unable to recover it. 00:26:08.608 [2024-04-24 21:41:31.263521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.263965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.264003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.608 qpair failed and we were unable to recover it. 00:26:08.608 [2024-04-24 21:41:31.264478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.264915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.264953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.608 qpair failed and we were unable to recover it. 00:26:08.608 [2024-04-24 21:41:31.265409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.265818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.265858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.608 qpair failed and we were unable to recover it. 00:26:08.608 [2024-04-24 21:41:31.266229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.266699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.266716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.608 qpair failed and we were unable to recover it. 00:26:08.608 [2024-04-24 21:41:31.266879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.267035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.267052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.608 qpair failed and we were unable to recover it. 
00:26:08.608 [2024-04-24 21:41:31.267458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.267847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.267864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.608 qpair failed and we were unable to recover it. 00:26:08.608 [2024-04-24 21:41:31.268256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.268662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.268707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.608 qpair failed and we were unable to recover it. 00:26:08.608 [2024-04-24 21:41:31.269087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.269517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.269556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.608 qpair failed and we were unable to recover it. 00:26:08.608 [2024-04-24 21:41:31.269951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.270381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.270419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.608 qpair failed and we were unable to recover it. 00:26:08.608 [2024-04-24 21:41:31.270838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.271237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.271275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.608 qpair failed and we were unable to recover it. 00:26:08.608 [2024-04-24 21:41:31.271710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.272166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.608 [2024-04-24 21:41:31.272205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.608 qpair failed and we were unable to recover it. 00:26:08.608 [2024-04-24 21:41:31.272709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.273154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.273192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 
00:26:08.609 [2024-04-24 21:41:31.273575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.274023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.274063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.274460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.274655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.274693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.275135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.275573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.275612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.276117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.276590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.276629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.277082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.277518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.277558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.278022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.278522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.278562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.278997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.279434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.279485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 
00:26:08.609 [2024-04-24 21:41:31.279932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.280317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.280356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.280588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.280940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.280979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.281356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.281872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.281911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.282419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.282626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.282642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.283100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.283504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.283544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.283925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.284424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.284474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.284907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.285374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.285413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 
00:26:08.609 [2024-04-24 21:41:31.285866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.286295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.286333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.286841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.287222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.287260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.287653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.288103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.288142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.288586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.288944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.288983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.289473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.289942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.289981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.290494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.290939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.290978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.291385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.291869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.291907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 
00:26:08.609 [2024-04-24 21:41:31.292414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.292803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.292842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.293238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.293680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.293720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.294109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.294484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.294501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.294831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.295200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.295238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.295749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.296138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.296184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.296608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.297109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.609 [2024-04-24 21:41:31.297146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.609 qpair failed and we were unable to recover it. 00:26:08.609 [2024-04-24 21:41:31.297588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.298046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.298083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 
00:26:08.610 [2024-04-24 21:41:31.298581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.299069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.299085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.299497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.299877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.299915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.300367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.300866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.300905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.301380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.301816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.301856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.302249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.302598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.302637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.303086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.303286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.303324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.303772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.304155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.304193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 
00:26:08.610 [2024-04-24 21:41:31.304696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.305217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.305255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.305800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.306262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.306310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.306708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.307118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.307156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.307621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.308078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.308116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.308492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.308959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.308976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.309489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.310041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.310079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.310482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.310750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.310788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 
00:26:08.610 [2024-04-24 21:41:31.311168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.311603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.311643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.312177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.312614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.312653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.313046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.313422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.313487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.314047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.314540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.314580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.315085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.315598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.315637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.316036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.316487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.316527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.317059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.317500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.317540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 
00:26:08.610 [2024-04-24 21:41:31.318051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.318438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.318493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.319002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.319395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.319434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.319898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.320338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.320376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.320851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.321305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.321343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.321742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.322240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.322278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.322743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.323123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.323162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.610 qpair failed and we were unable to recover it. 00:26:08.610 [2024-04-24 21:41:31.323568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.323993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.610 [2024-04-24 21:41:31.324031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 
00:26:08.611 [2024-04-24 21:41:31.324587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.325105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.325144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.325587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.326035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.326073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.326506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.326907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.326946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.327465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.327919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.327957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.328435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.328655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.328694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.329201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.329585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.329625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.330002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.330498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.330538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 
00:26:08.611 [2024-04-24 21:41:31.331074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.331569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.331608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.332076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.332547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.332585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.333046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.333494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.333533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.333921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.334424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.334471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.334917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.335356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.335395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.335861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.336301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.336340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.336769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.337149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.337187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 
00:26:08.611 [2024-04-24 21:41:31.337396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.337783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.337823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.338268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.338653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.338693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.339075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.339578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.339618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.340149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.340671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.340710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.341152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.341549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.341588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.342085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.342589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.342605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.343057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.343469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.343509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 
00:26:08.611 [2024-04-24 21:41:31.343923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.344356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.344394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.611 qpair failed and we were unable to recover it. 00:26:08.611 [2024-04-24 21:41:31.344758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.345184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.611 [2024-04-24 21:41:31.345222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.612 qpair failed and we were unable to recover it. 00:26:08.612 [2024-04-24 21:41:31.345664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.346165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.346203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.612 qpair failed and we were unable to recover it. 00:26:08.612 [2024-04-24 21:41:31.346606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.346970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.347009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.612 qpair failed and we were unable to recover it. 00:26:08.612 [2024-04-24 21:41:31.347447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.347937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.347975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.612 qpair failed and we were unable to recover it. 00:26:08.612 [2024-04-24 21:41:31.348440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.348897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.348936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.612 qpair failed and we were unable to recover it. 00:26:08.612 [2024-04-24 21:41:31.349438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.349703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.349742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.612 qpair failed and we were unable to recover it. 
00:26:08.612 [2024-04-24 21:41:31.350201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.350644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.350684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.612 qpair failed and we were unable to recover it. 00:26:08.612 [2024-04-24 21:41:31.351161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.351663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.351703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.612 qpair failed and we were unable to recover it. 00:26:08.612 [2024-04-24 21:41:31.352155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.352632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.352671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.612 qpair failed and we were unable to recover it. 00:26:08.612 [2024-04-24 21:41:31.353205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.353607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.353646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.612 qpair failed and we were unable to recover it. 00:26:08.612 [2024-04-24 21:41:31.354176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.354627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.354667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.612 qpair failed and we were unable to recover it. 00:26:08.612 [2024-04-24 21:41:31.355070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.355520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.355559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.612 qpair failed and we were unable to recover it. 00:26:08.612 [2024-04-24 21:41:31.356014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.356556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.612 [2024-04-24 21:41:31.356595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:08.612 qpair failed and we were unable to recover it. 
00:26:08.612 [2024-04-24 21:41:31.356992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.357439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.357486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.357939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.358411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.358449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.358920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.359298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.359337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.359879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.360281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.360297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.360717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.361184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.361223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.361654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.362101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.362140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.362598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.362993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.363031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.363561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.363938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.363977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.364429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.364801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.364840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.365284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.365706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.365745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.366190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.366633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.366673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.367137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.367572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.367612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.367842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.368292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.368331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.368859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.369293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.369331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.369774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.370247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.370286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.612 [2024-04-24 21:41:31.370682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.371118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.612 [2024-04-24 21:41:31.371156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.612 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.371421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.371877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.371917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.372392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.372843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.372883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.373275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.373702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.373742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.374137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.374644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.374661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.374992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.375200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.375216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.375610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.375984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.376023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.376419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.376923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.376964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.377475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.377862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.377900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.378370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.378822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.378861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.379258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.379654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.379694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.380095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.380557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.380596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.381058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.381558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.381574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.381917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.382297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.382344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.382780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.383204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.383242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.383687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.384077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.384116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.384609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.385115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.385131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.385360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.385677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.385717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.386118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.386558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.386604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.386930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.387252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.387270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.387686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.388121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.388160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.388669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.389068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.389084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.389503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3003262 Killed "${NVMF_APP[@]}" "$@"
00:26:08.613 [2024-04-24 21:41:31.389901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.389917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.390264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 21:41:31 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:26:08.613 [2024-04-24 21:41:31.390650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.390667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 21:41:31 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:08.613 [2024-04-24 21:41:31.391063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 21:41:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:26:08.613 21:41:31 -- common/autotest_common.sh@710 -- # xtrace_disable
00:26:08.613 [2024-04-24 21:41:31.391470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.391487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 21:41:31 -- common/autotest_common.sh@10 -- # set +x
00:26:08.613 [2024-04-24 21:41:31.391843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.392229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.392245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
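[Editor's note] The "Killed" message from line 44 of target_disconnect.sh explains the flood of refusals above: the test killed the running nvmf target process (PID 3003262), so every host-side reconnect attempt to 10.0.0.2:4420 is refused until disconnect_init/nvmfappstart bring a new target up. A rough stand-in for the reconnect loop the host side is effectively performing, using plain POSIX sockets rather than SPDK's actual qpair machinery (attempt count and back-off interval are assumptions for illustration):

    /* retry_connect.c - sketch: keep attempting until the restarted target
     * accepts, treating ECONNREFUSED as "target not up yet". Not SPDK code. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int try_connect(const char *ip, int port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
        inet_pton(AF_INET, ip, &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
            return fd;                 /* connected: hand fd to the caller */
        int saved = errno;             /* close() may clobber errno */
        close(fd);
        errno = saved;
        return -1;
    }

    int main(void)
    {
        for (int attempt = 0; attempt < 50; attempt++) {
            int fd = try_connect("10.0.0.2", 4420);   /* values from the log */
            if (fd >= 0) { printf("connected on attempt %d\n", attempt); close(fd); return 0; }
            if (errno != ECONNREFUSED) { perror("connect"); return 1; }
            usleep(100 * 1000);        /* back off briefly before retrying */
        }
        fprintf(stderr, "target never came back\n");
        return 1;
    }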
00:26:08.613 [2024-04-24 21:41:31.392583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.392952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.392968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.613 qpair failed and we were unable to recover it.
00:26:08.613 [2024-04-24 21:41:31.393319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.613 [2024-04-24 21:41:31.393768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.393784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.394208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.394620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.394636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.394971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.395421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.395437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.395784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.396181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.396196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.396616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.396955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.396971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.397423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.397849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.397866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.398015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.398416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.398432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.398834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.399239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.399255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 21:41:31 -- nvmf/common.sh@470 -- # nvmfpid=3004144
00:26:08.614 [2024-04-24 21:41:31.399657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 21:41:31 -- nvmf/common.sh@471 -- # waitforlisten 3004144
00:26:08.614 21:41:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:08.614 [2024-04-24 21:41:31.399972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.399989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 21:41:31 -- common/autotest_common.sh@817 -- # '[' -z 3004144 ']'
00:26:08.614 [2024-04-24 21:41:31.400372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 21:41:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:08.614 21:41:31 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:08.614 [2024-04-24 21:41:31.400770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.400787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 21:41:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:08.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:08.614 [2024-04-24 21:41:31.401019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 21:41:31 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:08.614 [2024-04-24 21:41:31.401351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.401367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
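[Editor's note] The waitforlisten step echoed above blocks until the freshly started nvmf_tgt (PID 3004144, launched inside the cvl_0_0_ns_spdk network namespace) is accepting RPCs on /var/tmp/spdk.sock. In SPDK's scripts this polling is done through its RPC client; the sketch below only illustrates the underlying idea of polling a UNIX-domain socket path until a listener appears (the retry budget is an assumption):

    /* wait_for_listen.c - illustrative only: poll until something accepts
     * connections on a UNIX-domain socket path. Not SPDK's waitforlisten. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int wait_for_listen(const char *path, int tries)
    {
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

        for (int i = 0; i < tries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                close(fd);
                return 0;           /* a listener answered */
            }
            close(fd);              /* ENOENT/ECONNREFUSED: not up yet */
            usleep(100 * 1000);
        }
        return -1;                  /* timed out */
    }

    int main(void)
    {
        if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
            puts("process is up and listening");
        else
            puts("timed out waiting for listener");
        return 0;
    }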
00:26:08.614 21:41:31 -- common/autotest_common.sh@10 -- # set +x
00:26:08.614 [2024-04-24 21:41:31.401752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.402157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.402174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.402561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.402945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.402961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.403463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.403799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.403816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.404201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.404601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.404617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.405005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.405430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.405446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.405852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.406002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.406019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.406431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.406855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.406871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.407263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.407715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.407731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.408062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.408523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.408540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.408886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.409221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.409237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.409588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.409998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.410014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.410472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.410855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.410871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.411327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.411716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.411733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.412144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.412558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.412574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.412910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.413267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.413283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.614 qpair failed and we were unable to recover it.
00:26:08.614 [2024-04-24 21:41:31.413680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.614 [2024-04-24 21:41:31.414099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.414115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.414461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.414882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.414898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.415320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.415470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.415486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.415887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.416310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.416328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.416671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.417147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.417163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.417555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.417884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.417900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.418289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.418691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.418707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.419156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.419558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.419574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.419966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.420355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.420371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.420761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.421141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.421157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.421495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.421890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.421906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.422386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.422768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.422784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.423203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.423548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.423565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.424014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.424423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.424442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.424816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.425272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.425288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.425703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.426097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.426113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.426460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.426867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.426883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.427336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.427830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.427847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.428248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.428703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.428720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.429068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.429483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.429500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.429976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.430430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.430446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.430894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.431391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.431407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.431841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.432189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.432205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.432684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.433087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.433106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.433553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.433938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.433954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.434441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.434841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.434858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.615 qpair failed and we were unable to recover it.
00:26:08.615 [2024-04-24 21:41:31.435324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.615 [2024-04-24 21:41:31.435677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.435693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.436079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.436234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.436250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.436597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.437060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.437076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.437470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.437869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.437886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.438288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.438691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.438707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.439136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.440325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.440352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.440792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.441203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.441219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.441618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.442029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.442045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.442446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.442795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.442811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.443265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.443585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.443601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.444078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.444415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.444431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.444784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.445180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.445197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.445554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.445943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.445959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.446385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.446735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.446751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.447139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.447484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.447500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.447846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.448181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.448197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.448597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.448928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.448943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.449340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.449334] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization...
00:26:08.616 [2024-04-24 21:41:31.449377] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:08.616 [2024-04-24 21:41:31.449505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.449521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.449930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.450334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.450350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.450818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.451213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.451229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [2024-04-24 21:41:31.451681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.452068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.452084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
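[Editor's note] The "Starting SPDK" and "DPDK EAL parameters" lines show the restarted target initializing DPDK with core mask 0xF0, i.e. CPU cores 4 through 7, matching the -m 0xF0 passed to nvmf_tgt earlier. A small sketch of how such a hex mask maps to CPU indices (illustrative arithmetic only, not DPDK's actual EAL argument parser):

    /* coremask.c - decode a hex core mask like EAL's -c 0xF0 into CPU indices. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long mask = 0xF0;            /* value from the EAL line */
        printf("core mask 0x%llX selects cores:", mask);
        for (int cpu = 0; cpu < 64; cpu++)
            if (mask & (1ULL << cpu))
                printf(" %d", cpu);                /* prints: 4 5 6 7 */
        printf("\n");
        return 0;
    }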
00:26:08.616 [2024-04-24 21:41:31.452410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.452768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.616 [2024-04-24 21:41:31.452785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.616 qpair failed and we were unable to recover it.
00:26:08.616 [... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats for tqpair=0x7f1b8c000b90 from 21:41:31.452 through 21:41:31.473 ...]
00:26:08.617 [2024-04-24 21:41:31.473864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420
00:26:08.617 qpair failed and we were unable to recover it.
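Editor's note: errno = 111 is ECONNREFUSED, i.e. the host 10.0.0.2 answered but nothing accepted the connection on port 4420 (the NVMe/TCP default). A minimal sketch of the failing step, reusing the address and port from the log purely as placeholders; run against any port with no listener to reproduce the same errno:

/* probe.c - a TCP connect attempt with no listener fails with
 * errno = 111 (ECONNREFUSED), matching the posix_sock_create errors
 * in this log. Address/port are taken from the log as assumptions. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target this prints errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}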
00:26:08.617 [... identical connect() failed (errno = 111) / qpair failed retries continue for tqpair=0x1b03730 from 21:41:31.473 onward ...]
00:26:08.618 EAL: No free 2048 kB hugepages reported on node 1
00:26:08.883 [... identical connect() failed (errno = 111) / qpair failed retries continue for tqpair=0x1b03730 through 21:41:31.536 ...]
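Editor's note: the interleaved EAL line comes from DPDK's environment abstraction layer and means no free 2048 kB hugepages were available on NUMA node 1 at startup. A minimal sketch, assuming a Linux host, of what that resource looks like from userspace; the mmap below fails (typically ENOMEM) unless hugepages have been reserved beforehand, e.g. via /proc/sys/vm/nr_hugepages, which is a system-configuration step this log does not perform:

/* hugepage_check.c - try to map one 2048 kB hugepage directly. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 2048 * 1024;   /* one 2 MB hugepage */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        /* With no free hugepages reserved, this reports the failure */
        printf("mmap(MAP_HUGETLB) failed: %s\n", strerror(errno));
        return 1;
    }
    munmap(p, len);
    puts("got one 2 MB hugepage");
    return 0;
}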
00:26:08.885 [... retries continue ...]
00:26:08.885 [2024-04-24 21:41:31.539867] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:08.886 [... identical connect() failed (errno = 111) / qpair failed retries continue for tqpair=0x1b03730 through 21:41:31.576 ...]
00:26:08.886 [2024-04-24 21:41:31.576985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.886 [2024-04-24 21:41:31.577377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.886 [2024-04-24 21:41:31.577393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.886 qpair failed and we were unable to recover it. 00:26:08.886 [2024-04-24 21:41:31.577778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.886 [2024-04-24 21:41:31.578163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.886 [2024-04-24 21:41:31.578179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.886 qpair failed and we were unable to recover it. 00:26:08.886 [2024-04-24 21:41:31.578635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.886 [2024-04-24 21:41:31.578971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.886 [2024-04-24 21:41:31.578988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.886 qpair failed and we were unable to recover it. 00:26:08.886 [2024-04-24 21:41:31.579392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.579623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.579641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.579983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.580325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.580342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.580676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.581147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.581164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.581414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.581749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.581766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 
00:26:08.887 [2024-04-24 21:41:31.581997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.582231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.582248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.582606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.583079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.583097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.583430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.583892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.583909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.584242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.584635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.584652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.584983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.585444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.585467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.585893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.586280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.586297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.586627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.586791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.586807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 
00:26:08.887 [2024-04-24 21:41:31.587200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.587433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.587454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.587776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.588169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.588186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.588533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.589000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.589016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.589415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.589910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.589927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.590258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.590679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.590695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.591034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.591434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.591458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.591810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.592015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.592031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 
00:26:08.887 [2024-04-24 21:41:31.592430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.592763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.592779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.593232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.593707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.593723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.594125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.594601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.594618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.595043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.595221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.595239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.595648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.596129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.596144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.887 qpair failed and we were unable to recover it. 00:26:08.887 [2024-04-24 21:41:31.596550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.596879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.887 [2024-04-24 21:41:31.596895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 00:26:08.888 [2024-04-24 21:41:31.597318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.597711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.597727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 
00:26:08.888 [2024-04-24 21:41:31.598046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.598466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.598482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 00:26:08.888 [2024-04-24 21:41:31.598869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.599025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.599042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 00:26:08.888 [2024-04-24 21:41:31.599432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.599919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.599936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 00:26:08.888 [2024-04-24 21:41:31.600074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.600448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.600468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 00:26:08.888 [2024-04-24 21:41:31.600805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.601135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.601150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 00:26:08.888 [2024-04-24 21:41:31.601599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.602071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.602088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 00:26:08.888 [2024-04-24 21:41:31.602486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.602903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.602919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 
00:26:08.888 [2024-04-24 21:41:31.603255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.603401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.603416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 00:26:08.888 [2024-04-24 21:41:31.603798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.604246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.604263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 00:26:08.888 [2024-04-24 21:41:31.604417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.604811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.604827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 00:26:08.888 [2024-04-24 21:41:31.604964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.605361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.605377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 00:26:08.888 [2024-04-24 21:41:31.605852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.606251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.606267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 00:26:08.888 [2024-04-24 21:41:31.606722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.607111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.607127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 00:26:08.888 [2024-04-24 21:41:31.607477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.607951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.888 [2024-04-24 21:41:31.607966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:08.888 qpair failed and we were unable to recover it. 
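For context on the repeated failures above: on Linux, errno = 111 is ECONNREFUSED, meaning the TCP connection to 10.0.0.2 on port 4420 (the standard NVMe/TCP port) is being actively refused, as happens while no target listener is accepting on that address. A minimal standalone sketch, using plain POSIX sockets rather than SPDK's posix_sock_create, that reproduces the same errno on a host where 10.0.0.2 is reachable but nothing listens on port 4420:

/* demo_econnrefused.c - illustrative only, not SPDK code.
 * Connecting to a reachable host/port with no listener fails with
 * errno 111 (ECONNREFUSED), the condition logged above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Address and port taken from the log lines above. */
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* Prints "connect() failed, errno = 111 (Connection refused)"
         * when the peer answers with RST because no listener is bound. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

If the host were unreachable instead, connect() would report a different errno (e.g. ETIMEDOUT or EHOSTUNREACH), so the steady errno = 111 here indicates the target machine is up but the NVMe/TCP listener is not yet bound.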
00:26:08.888 [2024-04-24 21:41:31.608401 .. 21:41:31.611853] (connect() failed, errno = 111 / sock connection error sequence continues for tqpair=0x1b03730 with addr=10.0.0.2, port=4420)
00:26:08.888 [2024-04-24 21:41:31.612049] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:08.888 [2024-04-24 21:41:31.612082] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:08.888 [2024-04-24 21:41:31.612092] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:08.888 [2024-04-24 21:41:31.612101] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:08.888 [2024-04-24 21:41:31.612108] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
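A note on the Tracepoint Group Mask notice above: in SPDK the mask is a bitmask with one bit per tracepoint group, so 0xFFFF enables groups 0 through 15. A tiny sketch, assuming nothing beyond that bitmask convention, that decodes the printed value:

/* decode_tpoint_mask.c - illustrative decoding of the mask printed
 * by app_setup_trace above; the group-to-name mapping is SPDK-internal
 * and deliberately not reproduced here. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t mask = 0xFFFF; /* value from the NOTICE line above */

    for (int group = 0; group < 64; group++) {
        if (mask & (UINT64_C(1) << group)) {
            printf("tracepoint group %d enabled\n", group);
        }
    }
    return 0;
}

The remaining notices give the two capture paths verbatim: run the quoted spdk_trace command against shared-memory instance 0 while the target is up, or copy /dev/shm/nvmf_trace.0 for offline analysis.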
00:26:08.888 [2024-04-24 21:41:31.612228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:26:08.888 [2024-04-24 21:41:31.612339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:26:08.888 [2024-04-24 21:41:31.612438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:26:08.888 [2024-04-24 21:41:31.612439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:26:08.888 [2024-04-24 21:41:31.612265 .. 21:41:31.616624] (connect() failed, errno = 111 / sock connection error sequence continues for tqpair=0x1b03730 with addr=10.0.0.2, port=4420)
00:26:08.888 .. 00:26:08.889 [2024-04-24 21:41:31.616961 .. 21:41:31.632338] (connect() failed, errno = 111 / sock connection error sequence continues for tqpair=0x1b03730 with addr=10.0.0.2, port=4420)
00:26:08.889 [2024-04-24 21:41:31.632712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.889 [2024-04-24 21:41:31.633158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.889 [2024-04-24 21:41:31.633179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:08.889 qpair failed and we were unable to recover it.
00:26:08.889 [2024-04-24 21:41:31.633517 .. 21:41:31.640101] (connect() failed, errno = 111 / sock connection error sequence continues for tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420)
00:26:08.890 [2024-04-24 21:41:31.640575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.890 [2024-04-24 21:41:31.640924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.890 [2024-04-24 21:41:31.640939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420
00:26:08.890 qpair failed and we were unable to recover it.
00:26:08.890 .. 00:26:08.891 [2024-04-24 21:41:31.641415 .. 21:41:31.670555] (connect() failed, errno = 111 / sock connection error sequence continues for tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420; each attempt ends: qpair failed and we were unable to recover it.)
00:26:08.891 [2024-04-24 21:41:31.670961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.671402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.671416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 00:26:08.891 [2024-04-24 21:41:31.671868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.672205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.672218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 00:26:08.891 [2024-04-24 21:41:31.672539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.672960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.672973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 00:26:08.891 [2024-04-24 21:41:31.673301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.673729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.673742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 00:26:08.891 [2024-04-24 21:41:31.674063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.674438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.674458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 00:26:08.891 [2024-04-24 21:41:31.674857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.675168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.675182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 00:26:08.891 [2024-04-24 21:41:31.675625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.676014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.676027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 
00:26:08.891 [2024-04-24 21:41:31.676472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.676903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.676915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 00:26:08.891 [2024-04-24 21:41:31.677220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.677544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.677556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 00:26:08.891 [2024-04-24 21:41:31.677967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.678355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.678368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 00:26:08.891 [2024-04-24 21:41:31.678680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.679056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.679071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 00:26:08.891 [2024-04-24 21:41:31.679541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.679884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.679897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 00:26:08.891 [2024-04-24 21:41:31.680310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.680771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.680784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 00:26:08.891 [2024-04-24 21:41:31.681098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.681433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.681446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 
00:26:08.891 [2024-04-24 21:41:31.681789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.682172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.682185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 00:26:08.891 [2024-04-24 21:41:31.682578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.682965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.682978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.891 qpair failed and we were unable to recover it. 00:26:08.891 [2024-04-24 21:41:31.683386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.891 [2024-04-24 21:41:31.683722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.683735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.684193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.684521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.684534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.684937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.685284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.685297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.685734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.686053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.686066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.686548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.686699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.686714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 
00:26:08.892 [2024-04-24 21:41:31.687102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.687549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.687562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.687887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.688335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.688348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.688668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.689157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.689170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.689580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.689967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.689980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.690378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.690702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.690715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.690961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.691265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.691278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.691672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.692048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.692060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 
00:26:08.892 [2024-04-24 21:41:31.692524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.692899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.692912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.693218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.693560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.693573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.693979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.694363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.694378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.694534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.694935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.694948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.695314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.695638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.695651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.696046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.696243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.696256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.696630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.696957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.696970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 
00:26:08.892 [2024-04-24 21:41:31.697365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.697768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.697781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.698158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.698496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.698509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.698783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.699103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.699117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.699536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.699919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.699932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.700396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.700856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.700870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.892 [2024-04-24 21:41:31.701328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.701713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.892 [2024-04-24 21:41:31.701728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.892 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.702049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.702422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.702435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 
00:26:08.893 [2024-04-24 21:41:31.702859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.703322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.703335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.703755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.704171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.704184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.704496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.704976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.704989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.705406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.705799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.705812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.706124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.706602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.706615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.707002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.707388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.707400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.707597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.707940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.707953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 
00:26:08.893 [2024-04-24 21:41:31.708445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.708611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.708624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.709094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.709560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.709573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.709913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.710307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.710320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.710713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.711122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.711135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.711464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.711845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.711857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.712022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.712402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.712415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.712791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.713160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.713174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 
00:26:08.893 [2024-04-24 21:41:31.713551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.713936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.713949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.714171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.714567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.714580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.714961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.715288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.715301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.715710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.716090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.716103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.716473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.716780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.716793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.717141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.717458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.717471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.717867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.718198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.718210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 
00:26:08.893 [2024-04-24 21:41:31.718656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.719125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.719137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.719515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.719882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.719894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.720218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.720660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.720673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.720997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.721372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.721385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.721784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.722198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.722212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.722675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.723049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.723062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.893 qpair failed and we were unable to recover it. 00:26:08.893 [2024-04-24 21:41:31.723439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.893 [2024-04-24 21:41:31.723884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.723897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 
00:26:08.894 [2024-04-24 21:41:31.724204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.724643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.724656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.725100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.725425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.725438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.725829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.726316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.726329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.726653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.727092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.727105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.727483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.727948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.727961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.728356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.728809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.728821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.729294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.729735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.729748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 
00:26:08.894 [2024-04-24 21:41:31.730195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.730523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.730536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.730911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.731303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.731316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.731536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.732016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.732029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.732368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.732807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.732820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.733193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.733566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.733579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.733956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.734352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.734365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.734755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.735220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.735233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 
00:26:08.894 [2024-04-24 21:41:31.735612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.736016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.736029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.736411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.736875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.736888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.737261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.737654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.737667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.738134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.738520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.738533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.739024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.739341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.739353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.739748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.740068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.740081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.740466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.740855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.740868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 
00:26:08.894 [2024-04-24 21:41:31.741317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.741782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.741795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.742144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.742528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.742541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.743029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.743473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.743486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.743918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.744291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.744304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.744694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.744845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.744858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.745244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.745729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.745742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 00:26:08.894 [2024-04-24 21:41:31.746208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.746680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.894 [2024-04-24 21:41:31.746693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420 00:26:08.894 qpair failed and we were unable to recover it. 
00:26:08.895 [2024-04-24 21:41:31.747139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.895 [2024-04-24 21:41:31.747513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.895 [2024-04-24 21:41:31.747526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b84000b90 with addr=10.0.0.2, port=4420
00:26:08.895 qpair failed and we were unable to recover it.
[... the same four-line failure record (two posix_sock_create connect() errors with errno = 111, then the nvme_tcp_qpair_connect_sock error for tqpair=0x7f1b84000b90 against 10.0.0.2:4420, then "qpair failed and we were unable to recover it.") repeats with fresh timestamps for every connection attempt from 21:41:31.747 through 21:41:31.765 (console time 00:26:08.895 to 00:26:09.158) ...]
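Editor's note for anyone triaging this run: errno = 111 on Linux is ECONNREFUSED, which typically means the TCP SYN to 10.0.0.2:4420 (4420 is the default NVMe/TCP port) is answered with a reset because no nvmf target is listening there at this point in the test. The following minimal C sketch is illustration only, not SPDK source; the address and port literals are simply copied from the log. Run against a host/port with no listener, it reproduces the same errno the posix_sock_create lines report:

/*
 * Illustration only (not SPDK code): connect() to an address/port with
 * no listener fails with errno 111 (ECONNREFUSED) on Linux, the same
 * error posix_sock_create logs above.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port, as in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With nothing listening on 10.0.0.2:4420 this prints errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Compiled with cc and run where 10.0.0.2:4420 has no listener, this prints "connect() failed, errno = 111 (Connection refused)", matching the lines above.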
00:26:09.158 [2024-04-24 21:41:31.765854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.158 [2024-04-24 21:41:31.766286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.158 [2024-04-24 21:41:31.766307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420
00:26:09.158 qpair failed and we were unable to recover it.
[... from 21:41:31.766 onward the tqpair handle in the nvme_tcp_qpair_connect_sock line changes from 0x7f1b84000b90 to 0x1b03730, but the failure record is otherwise identical and repeats for every connection attempt through 21:41:31.878 (console time 00:26:09.160), with 10.0.0.2:4420 refusing every connect() ...]
00:26:09.160 [2024-04-24 21:41:31.878670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.879121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.879139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.160 qpair failed and we were unable to recover it. 00:26:09.160 [2024-04-24 21:41:31.879601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.880093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.880113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.160 qpair failed and we were unable to recover it. 00:26:09.160 [2024-04-24 21:41:31.880507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.880725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.880742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.160 qpair failed and we were unable to recover it. 00:26:09.160 [2024-04-24 21:41:31.881204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.881653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.881671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.160 qpair failed and we were unable to recover it. 00:26:09.160 [2024-04-24 21:41:31.882147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.882531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.882549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.160 qpair failed and we were unable to recover it. 00:26:09.160 [2024-04-24 21:41:31.883001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.883474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.883491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.160 qpair failed and we were unable to recover it. 00:26:09.160 [2024-04-24 21:41:31.883968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.884443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.884465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.160 qpair failed and we were unable to recover it. 
00:26:09.160 [2024-04-24 21:41:31.884868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.885255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.885271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.160 qpair failed and we were unable to recover it. 00:26:09.160 [2024-04-24 21:41:31.885674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.886072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.886090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.160 qpair failed and we were unable to recover it. 00:26:09.160 [2024-04-24 21:41:31.886499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.886881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.886899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.160 qpair failed and we were unable to recover it. 00:26:09.160 [2024-04-24 21:41:31.887294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.887698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.887715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.160 qpair failed and we were unable to recover it. 00:26:09.160 [2024-04-24 21:41:31.888185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.888636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.888653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.160 qpair failed and we were unable to recover it. 00:26:09.160 [2024-04-24 21:41:31.889129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.889589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.889606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.160 qpair failed and we were unable to recover it. 00:26:09.160 [2024-04-24 21:41:31.890081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.160 [2024-04-24 21:41:31.890415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.890433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 
00:26:09.161 [2024-04-24 21:41:31.890910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.891364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.891381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.891799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.892204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.892221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.892455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.892878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.892895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.893348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.893700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.893717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.894097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.894523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.894541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.894751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.895241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.895258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.895734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.896121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.896138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 
00:26:09.161 [2024-04-24 21:41:31.896550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.896999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.897016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.897476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.897802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.897819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.898289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.898611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.898628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.898940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.899371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.899388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.899810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.900308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.900325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.900744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.901214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.901232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.901654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.902124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.902141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 
00:26:09.161 [2024-04-24 21:41:31.902479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.902896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.902913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.903254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.903646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.903663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.904070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.904420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.904437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.904794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.905220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.905237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.905690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.906142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.906160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.906538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.906919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.906936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.907390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.907741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.907759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 
00:26:09.161 [2024-04-24 21:41:31.908211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.908661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.908678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.909088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.909482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.909500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.909904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.910286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.910303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.910779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.911117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.911134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.911559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.912014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.912031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.912423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.912873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.912891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.913292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.913632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.913649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 
00:26:09.161 [2024-04-24 21:41:31.914002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.914478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.914495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.914900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.915296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.915312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.915707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.916123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.916140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.916535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.917011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.917029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.917458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.917868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.917884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.918214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.918627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.918644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.918812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.919212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.919229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 
00:26:09.161 [2024-04-24 21:41:31.919640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.920032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.920048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.920394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.920846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.920863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.921193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.921531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.921548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.921959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.922289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.922308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.922710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.923155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.923172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.923583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.923968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.923984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.924393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.924807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.924824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 
00:26:09.161 [2024-04-24 21:41:31.925288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.925774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.925791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.926185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.926572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.926589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.927043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.927443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.927464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.927939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.928390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.928407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.928829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.929280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.929297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.929701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.930216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.930233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.930591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.931088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.931105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 
00:26:09.161 [2024-04-24 21:41:31.931448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.931906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.931923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.932355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.932696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.932713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.933098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.933501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.933517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.933934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.934384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.934400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.934725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.935111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.935128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.161 qpair failed and we were unable to recover it. 00:26:09.161 [2024-04-24 21:41:31.935601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.161 [2024-04-24 21:41:31.935935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.935951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.936440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.936847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.936864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 
00:26:09.162 [2024-04-24 21:41:31.937315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.937719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.937736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.938120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.938582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.938599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.938988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.939311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.939328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.939720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.940135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.940152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.940581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.940988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.941005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.941360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.941830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.941847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.942321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.942673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.942690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 
00:26:09.162 [2024-04-24 21:41:31.943145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.943558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.943576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.943962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.944439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.944459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.944938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.945366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.945383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.945816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.946123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.946141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.946618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.947095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.947112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.947471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.947873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.947889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.948310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.948713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.948730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 
00:26:09.162 [2024-04-24 21:41:31.948885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.949360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.949377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.949611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.950008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.950025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.950504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.950961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.950978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.951376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.951778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.951795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.952244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.952659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.952676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.953178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.953595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.953612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.954086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.954470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.954487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 
00:26:09.162 [2024-04-24 21:41:31.954886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.955282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.955299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.955724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.956160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.956177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.956510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.956963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.956980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.957342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.957699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.957716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.958117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.958518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.958535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.958873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.959279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.959297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.959685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.960151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.960168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 
00:26:09.162 [2024-04-24 21:41:31.960522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.960972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.960989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.961420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.961831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.961849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.962314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.962791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.962808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.963307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.963704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.963723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.964199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.964429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.964446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.964777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.965240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.965259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.965683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.966087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.966104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 
00:26:09.162 [2024-04-24 21:41:31.966437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.966924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.966941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.967330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.967658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.967676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.968080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.968529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.968548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.968943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.969415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.969432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.969890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.970339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.970356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.970828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.971273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.971289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.971649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.972101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.972118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 
00:26:09.162 [2024-04-24 21:41:31.972567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.972939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.972957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.973434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.973824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.973841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.974187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.974334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.974350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.974696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.975100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.975116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.975525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.975978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.975996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.976416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.976897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.976915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.977224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.977671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.977688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 
00:26:09.162 [2024-04-24 21:41:31.978065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.978476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.978493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.978969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.979361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.979378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.979795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.980212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.980229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.162 qpair failed and we were unable to recover it. 00:26:09.162 [2024-04-24 21:41:31.980730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.981229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.162 [2024-04-24 21:41:31.981246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.981674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.982144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.982161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.982571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.982965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.982982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.983226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.983610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.983627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 
00:26:09.163 [2024-04-24 21:41:31.984030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.984426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.984443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.984857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.985241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.985258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.985671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.986130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.986147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.986628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.986831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.986849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.987344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.987692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.987710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.988133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.988543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.988561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.988971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.989442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.989462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 
00:26:09.163 [2024-04-24 21:41:31.989930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.990399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.990416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b03730 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.990878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.991309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.991330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.991717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.992170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.992188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.992612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.993009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.993027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.993507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.993909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.993926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.994390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.994722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.994740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.995217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.995688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.995707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 
00:26:09.163 [2024-04-24 21:41:31.996234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.996687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.996705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.997110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.997560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.997578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.998033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.998500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.998518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.998934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.999383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:31.999400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:31.999810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.000264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.000281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.000734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.001197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.001214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.001693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.002167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.002184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 
00:26:09.163 [2024-04-24 21:41:32.002658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.003112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.003129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.003533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.003910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.003927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.004401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.004824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.004842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.005191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.005642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.005660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.006137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.006520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.006537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.006957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.007382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.007399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.007808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.008277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.008294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 
00:26:09.163 [2024-04-24 21:41:32.008636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.009052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.009069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.009526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.010001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.010019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.010497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.010977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.010995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.011512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.011915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.011933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.012410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.012809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.012826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.013238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.013647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.013665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.014143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.014496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.014513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 
00:26:09.163 [2024-04-24 21:41:32.014989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.015463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.015480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.015935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.016387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.016404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.016821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.017165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.017182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.017655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.018129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.018146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.018619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.019096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.019113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.019589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.020068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.020085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.020296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.020712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.020729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 
00:26:09.163 [2024-04-24 21:41:32.021183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.021565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.021582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.022082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.022558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.022575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.022918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.023318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.023336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.023544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.024020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.024037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.024278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.024702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.024719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.163 [2024-04-24 21:41:32.025104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.025556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.163 [2024-04-24 21:41:32.025573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.163 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.025954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.026367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.026384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 
00:26:09.164 [2024-04-24 21:41:32.026756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.027228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.027246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.027646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.028046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.028063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.028393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.028868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.028885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.029309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.029774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.029791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.030256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.030729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.030746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.031222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.031604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.031621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.032106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.032491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.032508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 
00:26:09.164 [2024-04-24 21:41:32.032960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.033429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.033446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.033935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.034386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.034403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.034733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.035157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.035174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.035578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.036029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.036046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.036437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.036918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.036944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.037354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.037752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.037769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.038252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.038737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.038754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 
00:26:09.164 [2024-04-24 21:41:32.039154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.039543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.039561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.040039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.040503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.040522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.164 [2024-04-24 21:41:32.041004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.041396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.164 [2024-04-24 21:41:32.041414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.164 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.041890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.042317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.042335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.042792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.043266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.043283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.043760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.044145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.044166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.044528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.045005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.045022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 
00:26:09.428 [2024-04-24 21:41:32.045496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.045970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.045987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.046396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.046879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.046898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.047357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.047757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.047774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.048179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.048522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.048540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.049028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.049440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.049462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.049942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.050413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.050430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.050722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.051198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.051215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 
00:26:09.428 [2024-04-24 21:41:32.051550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.052002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.052019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.052473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.052684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.052706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.053186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.053398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.053415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.053881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.054374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.054391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.054744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.055255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.055272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.055747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.056085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.056103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.056557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.056977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.056994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 
00:26:09.428 [2024-04-24 21:41:32.057469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.057803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.057821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.058209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.058680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.058697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.059177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.059650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.059667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.060070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.060481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.060499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.060892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.061238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.061258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.061668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.062071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.062088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.062491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.062966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.062983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 
00:26:09.428 [2024-04-24 21:41:32.063393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.063745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.063762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.064215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.064616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.064633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.065084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.065537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.065554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.066010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.066470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.066488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.066892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.067294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.067311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.067698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.068112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.068129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 00:26:09.428 [2024-04-24 21:41:32.068552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.068934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.428 [2024-04-24 21:41:32.068951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.428 qpair failed and we were unable to recover it. 
00:26:09.428 [2024-04-24 21:41:32.069377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428 [2024-04-24 21:41:32.069775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.428 [2024-04-24 21:41:32.069796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:09.428 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for every subsequent connect attempt from 2024-04-24 21:41:32.070 through 21:41:32.198: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:26:09.432 [2024-04-24 21:41:32.199288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.199646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.199663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.200042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.200468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.200486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.200937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.201336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.201353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.201745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.202143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.202160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.202585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.203058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.203075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.203533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.203915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.203932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.204434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.204844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.204861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 
00:26:09.432 [2024-04-24 21:41:32.205333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.205755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.205777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.205990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.206393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.206410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.206911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.207230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.207246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.207724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.207956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.207973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.208351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.208753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.208770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.209097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.209585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.209602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.210078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.210532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.210549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 
00:26:09.432 [2024-04-24 21:41:32.211024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.211410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.211427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.211854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.212332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.212349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.212761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.213162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.213179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.213582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.213998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.214015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.214228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.214685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.214702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.215115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.215564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.215583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.216070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.216521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.216538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 
00:26:09.432 [2024-04-24 21:41:32.216918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.217392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.217409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.217793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.218181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.218199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.218677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.219153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.219171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.219571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.219993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.220010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.220433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.220888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.220905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.221290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.221696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.221714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.222092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.222495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.222513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 
00:26:09.432 [2024-04-24 21:41:32.222933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.223381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.223398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.223861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.224250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.224267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.224764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.225154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.432 [2024-04-24 21:41:32.225171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.432 qpair failed and we were unable to recover it. 00:26:09.432 [2024-04-24 21:41:32.225583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.226054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.226071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.226549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.226933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.226951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.227342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.227803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.227821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.228225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.228681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.228699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 
00:26:09.433 [2024-04-24 21:41:32.229195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.229591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.229609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.230066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.230467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.230485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.230902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.231404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.231421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.231946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.232442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.232462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.232747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.233198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.233215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.233613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.234064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.234080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.234370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.234770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.234787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 
00:26:09.433 [2024-04-24 21:41:32.235263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.235608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.235625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.236086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.236414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.236432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.236911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.237312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.237329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.237784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.238235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.238252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.238757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.238964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.238981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.239378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.239709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.239727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.240145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.240433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.240454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 
00:26:09.433 [2024-04-24 21:41:32.240937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.241388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.241405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.241784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.242236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.242253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.242593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.243091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.243108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.243500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.243966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.243983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.244481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.244933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.244949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.245403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.245878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.245895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.246386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.246784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.246802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 
00:26:09.433 [2024-04-24 21:41:32.247276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.247741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.247758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.248175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.248611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.248628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.249098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.249570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.249587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.250062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.250487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.250504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.250705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.251101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.251117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.251592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.251903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.251920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 00:26:09.433 [2024-04-24 21:41:32.252338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.252812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.433 [2024-04-24 21:41:32.252829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420 00:26:09.433 qpair failed and we were unable to recover it. 
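The elided block is one pattern repeating: each attempt logs two connect() failures with errno = 111 from posix_sock_create, then nvme_tcp_qpair_connect_sock reports that the TCP qpair to 10.0.0.2 port 4420 could not be set up and the qpair is dropped. On Linux, errno 111 is ECONNREFUSED, i.e. nothing is accepting TCP connections on that address and port yet. A minimal way to observe the same condition from the shell, assuming bash's /dev/tcp support; this probe is an illustration, not part of the autotest scripts:

# Probe the listener the initiator keeps retrying; a refused
# connection here is the same errno 111 seen in the log above.
addr=10.0.0.2 port=4420
if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
    echo "listener is up on ${addr}:${port}"
else
    echo "connection refused on ${addr}:${port} (errno 111, ECONNREFUSED)"
fi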
00:26:09.433 [2024-04-24 21:41:32.253316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433 [2024-04-24 21:41:32.253815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433 [2024-04-24 21:41:32.253832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:09.433 qpair failed and we were unable to recover it.
00:26:09.433 [2024-04-24 21:41:32.254245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433 [2024-04-24 21:41:32.254660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433 [2024-04-24 21:41:32.254677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:09.433 qpair failed and we were unable to recover it.
00:26:09.433 [2024-04-24 21:41:32.255063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433 [2024-04-24 21:41:32.255407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433 [2024-04-24 21:41:32.255424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:09.433 qpair failed and we were unable to recover it.
00:26:09.433 [2024-04-24 21:41:32.255828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433 21:41:32 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:09.433 21:41:32 -- common/autotest_common.sh@850 -- # return 0
00:26:09.433 [2024-04-24 21:41:32.256303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433 [2024-04-24 21:41:32.256321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:09.433 qpair failed and we were unable to recover it.
00:26:09.433 21:41:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:26:09.433 [2024-04-24 21:41:32.256707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433 21:41:32 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:09.433 [2024-04-24 21:41:32.256916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433 [2024-04-24 21:41:32.256936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:09.433 qpair failed and we were unable to recover it.
00:26:09.433 21:41:32 -- common/autotest_common.sh@10 -- # set +x
00:26:09.433 [2024-04-24 21:41:32.257330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433 [2024-04-24 21:41:32.257727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433 [2024-04-24 21:41:32.257747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:09.433 qpair failed and we were unable to recover it.
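The interleaved xtrace lines show the harness side of this: a retry counter is compared against 0 in autotest_common.sh, the wait helper returns 0 because the target's RPC interface has come up, and nvmf/common.sh closes the timing region with timing_exit start_nvmf_tgt. A sketch of a countdown wait loop of that shape follows; it is a hedged reconstruction for illustration only, not the actual autotest_common.sh source, and the socket path and retry budget are assumptions:

# Illustrative countdown wait: return 1 when the budget hits 0,
# return 0 as soon as the awaited condition (here, a hypothetical
# RPC socket path) is met.
wait_for_tgt() {
    local i=30                      # assumed retry budget
    while [[ ! -S /var/tmp/spdk.sock ]]; do
        (( i == 0 )) && return 1    # out of retries: give up
        (( i-- ))
        sleep 1
    done
    return 0                        # socket appeared: target is up
}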
00:26:09.433 [2024-04-24 21:41:32.258214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433 [2024-04-24 21:41:32.258501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.433 [2024-04-24 21:41:32.258517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:09.433 qpair failed and we were unable to recover it.
...
00:26:09.434 [2024-04-24 21:41:32.294401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434 [2024-04-24 21:41:32.294790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:09.434 [2024-04-24 21:41:32.294808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1b8c000b90 with addr=10.0.0.2, port=4420
00:26:09.434 qpair failed and we were unable to recover it.
00:26:09.434 [... 2024-04-24 21:41:32.295258 through 21:41:32.298477: connect() failed, errno = 111 retry cycles continue against tqpair=0x7f1b8c000b90, addr=10.0.0.2, port=4420 ...]
00:26:09.434 21:41:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:09.434 21:41:32 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:09.434 21:41:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:09.434 21:41:32 -- common/autotest_common.sh@10 -- # set +x
00:26:09.434 [... 2024-04-24 21:41:32.298832 through 21:41:32.300220: connect() retry cycles continue, interleaved with the script output above ...]
00:26:09.434 [... 2024-04-24 21:41:32.300619 through 21:41:32.312715: connect() failed, errno = 111 retry cycles continue against tqpair=0x7f1b8c000b90, addr=10.0.0.2, port=4420 (log-line prefix advances through 00:26:09.435 to 00:26:09.695 over this stretch) ...]
00:26:09.695 [... 2024-04-24 21:41:32.313192 through 21:41:32.316950: connect() retry cycles continue ...]
00:26:09.695 Malloc0
00:26:09.695 21:41:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:09.695 21:41:32 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:09.695 21:41:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:09.695 21:41:32 -- common/autotest_common.sh@10 -- # set +x
00:26:09.695 [... 2024-04-24 21:41:32.317410 through 21:41:32.319243: connect() retry cycles continue, interleaved with the script output above ...]
00:26:09.695 [... 2024-04-24 21:41:32.319721 through 21:41:32.324197: connect() retry cycles continue ...]
00:26:09.695 [2024-04-24 21:41:32.324614] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:09.695 [... 2024-04-24 21:41:32.324681 through 21:41:32.325700: connect() retry cycles continue ...]
00:26:09.696 [... 2024-04-24 21:41:32.326174 through 21:41:32.326591: one more connect() retry cycle against tqpair=0x7f1b8c000b90 ...]
00:26:09.696 [2024-04-24 21:41:32.326626] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b11260 (9): Bad file descriptor
00:26:09.696 [... 2024-04-24 21:41:32.327150 through 21:41:32.332415: connect() failed, errno = 111 retry cycles continue, now against tqpair=0x7f1b84000b90, addr=10.0.0.2, port=4420 ...]
00:26:09.696 [... 2024-04-24 21:41:32.332795 through 21:41:32.333251: connect() retry cycle against tqpair=0x7f1b84000b90 ...]
00:26:09.696 21:41:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:09.696 21:41:32 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:09.696 21:41:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:09.696 21:41:32 -- common/autotest_common.sh@10 -- # set +x
00:26:09.696 [... 2024-04-24 21:41:32.333693 through 21:41:32.337961: connect() retry cycles continue, interleaved with the script output above; from 21:41:32.336038 the attempts are again against tqpair=0x7f1b8c000b90 ...]
00:26:09.696 [... 2024-04-24 21:41:32.338467 through 21:41:32.340998: connect() retry cycles continue ...]
00:26:09.696 21:41:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:09.696 21:41:32 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:09.696 21:41:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:09.696 21:41:32 -- common/autotest_common.sh@10 -- # set +x
00:26:09.696 [... 2024-04-24 21:41:32.341392 through 21:41:32.344055: connect() retry cycles continue, interleaved with the script output above ...]
00:26:09.696 [... 2024-04-24 21:41:32.344531 through 21:41:32.349239: connect() retry cycles continue ...]
00:26:09.696 21:41:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:09.696 [... 2024-04-24 21:41:32.349660 through 21:41:32.349678: one more connect() retry cycle ...]
00:26:09.697 21:41:32 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:09.697 21:41:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:09.697 21:41:32 -- common/autotest_common.sh@10 -- # set +x
00:26:09.697 [... 2024-04-24 21:41:32.350084 through 21:41:32.352847: connect() failed, errno = 111 retry cycles continue against tqpair=0x7f1b8c000b90, interleaved with the script output above ...]
00:26:09.697 [2024-04-24 21:41:32.352876] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:09.697 [2024-04-24 21:41:32.356098] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:26:09.697 [2024-04-24 21:41:32.356146] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f1b8c000b90 (107): Transport endpoint is not connected
00:26:09.697 [2024-04-24 21:41:32.356200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:09.697 qpair failed and we were unable to recover it.
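The rpc_cmd lines in the trace above are the autotest wrapper around SPDK's scripts/rpc.py. As a minimal standalone sketch (not the test itself), the same target bring-up could be reproduced against a running nvmf_tgt on the default RPC socket, with the flags copied verbatim from the trace:

    # create a 64 MB malloc bdev with 512-byte blocks, named Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # create the TCP transport (the -o flag is carried over as-is from the trace)
    scripts/rpc.py nvmf_create_transport -t tcp -o
    # create the subsystem, allow any host (-a), set its serial number (-s)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # attach the bdev as a namespace and expose a TCP listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420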
00:26:09.697 21:41:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:09.697 21:41:32 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:09.697 21:41:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:09.697 21:41:32 -- common/autotest_common.sh@10 -- # set +x
00:26:09.697 21:41:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:09.697 [2024-04-24 21:41:32.365443] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:09.697 [2024-04-24 21:41:32.365649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:09.697 [2024-04-24 21:41:32.365672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:09.697 [2024-04-24 21:41:32.365684] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:09.697 [2024-04-24 21:41:32.365694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:09.697 [2024-04-24 21:41:32.365716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:09.697 qpair failed and we were unable to recover it.
00:26:09.697 21:41:32 -- host/target_disconnect.sh@58 -- # wait 3003470
00:26:09.697 [... 2024-04-24 21:41:32.375225 through 21:41:32.385404: the same Unknown controller ID 0x1 / Fabric CONNECT failure block repeats twice more ...]
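Decoding the CONNECT failure above: sct 1 is the command-specific status code type, and sc 130 is 0x82, which for the NVMe-oF Fabrics CONNECT command is Connect Invalid Parameters (per the NVMe-oF spec's CONNECT status table); that lines up with the target-side "Unknown controller ID 0x1", i.e. the host is requesting an I/O qpair on a controller ID the target no longer knows about. The hex conversion, for reference:

    python3 -c 'print(hex(130))'    # 0x82: Fabrics CONNECT - Connect Invalid Parameters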
00:26:09.697 [... 2024-04-24 21:41:32.395209 through 21:41:32.686394: the same five-record failure block repeats roughly every 10 ms for each qpair connect attempt - ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1; nvme_fabric.c: 600/611: Connect command failed, rc -5 / Connect command completed with error: sct 1, sc 130; nvme_tcp.c:2423/2213: Failed to poll NVMe-oF Fabric CONNECT command / Failed to connect tqpair=0x7f1b8c000b90; nvme_qpair.c: 804: CQ transport error -6 (No such device or address) on qpair id 1 - each ending "qpair failed and we were unable to recover it." (the log-line prefix advances from 00:26:09.697 through 00:26:09.958 to 00:26:09.959 over this stretch) ...]
00:26:09.959 [2024-04-24 21:41:32.695958] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.959 [2024-04-24 21:41:32.696086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.959 [2024-04-24 21:41:32.696104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.959 [2024-04-24 21:41:32.696114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.959 [2024-04-24 21:41:32.696123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.959 [2024-04-24 21:41:32.696142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.959 qpair failed and we were unable to recover it. 00:26:09.959 [2024-04-24 21:41:32.706033] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.959 [2024-04-24 21:41:32.706156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.959 [2024-04-24 21:41:32.706178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.959 [2024-04-24 21:41:32.706188] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.959 [2024-04-24 21:41:32.706197] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.959 [2024-04-24 21:41:32.706216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.959 qpair failed and we were unable to recover it. 00:26:09.959 [2024-04-24 21:41:32.716080] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.959 [2024-04-24 21:41:32.716208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.959 [2024-04-24 21:41:32.716227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.959 [2024-04-24 21:41:32.716237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.959 [2024-04-24 21:41:32.716246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.959 [2024-04-24 21:41:32.716265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.959 qpair failed and we were unable to recover it. 
00:26:09.959 [2024-04-24 21:41:32.726080] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.959 [2024-04-24 21:41:32.726236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.959 [2024-04-24 21:41:32.726255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.959 [2024-04-24 21:41:32.726267] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.959 [2024-04-24 21:41:32.726277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.959 [2024-04-24 21:41:32.726297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.960 qpair failed and we were unable to recover it. 00:26:09.960 [2024-04-24 21:41:32.736123] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.960 [2024-04-24 21:41:32.736243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.960 [2024-04-24 21:41:32.736263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.960 [2024-04-24 21:41:32.736273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.960 [2024-04-24 21:41:32.736282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.960 [2024-04-24 21:41:32.736303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.960 qpair failed and we were unable to recover it. 00:26:09.960 [2024-04-24 21:41:32.746073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.960 [2024-04-24 21:41:32.746210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.960 [2024-04-24 21:41:32.746229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.960 [2024-04-24 21:41:32.746239] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.960 [2024-04-24 21:41:32.746252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.960 [2024-04-24 21:41:32.746271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.960 qpair failed and we were unable to recover it. 
00:26:09.960 [2024-04-24 21:41:32.756182] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.960 [2024-04-24 21:41:32.756312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.960 [2024-04-24 21:41:32.756330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.960 [2024-04-24 21:41:32.756340] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.960 [2024-04-24 21:41:32.756349] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.960 [2024-04-24 21:41:32.756368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.960 qpair failed and we were unable to recover it. 00:26:09.960 [2024-04-24 21:41:32.766233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.960 [2024-04-24 21:41:32.766356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.960 [2024-04-24 21:41:32.766375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.960 [2024-04-24 21:41:32.766385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.960 [2024-04-24 21:41:32.766394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.960 [2024-04-24 21:41:32.766413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.960 qpair failed and we were unable to recover it. 00:26:09.960 [2024-04-24 21:41:32.776214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.960 [2024-04-24 21:41:32.776337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.960 [2024-04-24 21:41:32.776356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.960 [2024-04-24 21:41:32.776366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.960 [2024-04-24 21:41:32.776375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.960 [2024-04-24 21:41:32.776394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.960 qpair failed and we were unable to recover it. 
00:26:09.960 [2024-04-24 21:41:32.786254] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.960 [2024-04-24 21:41:32.786384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.960 [2024-04-24 21:41:32.786403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.960 [2024-04-24 21:41:32.786413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.960 [2024-04-24 21:41:32.786422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.960 [2024-04-24 21:41:32.786441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.960 qpair failed and we were unable to recover it. 00:26:09.960 [2024-04-24 21:41:32.796287] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.960 [2024-04-24 21:41:32.796421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.960 [2024-04-24 21:41:32.796440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.960 [2024-04-24 21:41:32.796455] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.960 [2024-04-24 21:41:32.796464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.960 [2024-04-24 21:41:32.796484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.960 qpair failed and we were unable to recover it. 00:26:09.960 [2024-04-24 21:41:32.806309] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.960 [2024-04-24 21:41:32.806432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.960 [2024-04-24 21:41:32.806457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.960 [2024-04-24 21:41:32.806468] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.960 [2024-04-24 21:41:32.806477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.960 [2024-04-24 21:41:32.806497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.960 qpair failed and we were unable to recover it. 
00:26:09.960 [2024-04-24 21:41:32.816530] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.960 [2024-04-24 21:41:32.816657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.960 [2024-04-24 21:41:32.816676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.960 [2024-04-24 21:41:32.816686] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.960 [2024-04-24 21:41:32.816695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.960 [2024-04-24 21:41:32.816716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.960 qpair failed and we were unable to recover it. 00:26:09.960 [2024-04-24 21:41:32.826380] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.960 [2024-04-24 21:41:32.826509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.960 [2024-04-24 21:41:32.826528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.960 [2024-04-24 21:41:32.826538] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.960 [2024-04-24 21:41:32.826547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.960 [2024-04-24 21:41:32.826566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.960 qpair failed and we were unable to recover it. 00:26:09.960 [2024-04-24 21:41:32.836339] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:09.960 [2024-04-24 21:41:32.836472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:09.960 [2024-04-24 21:41:32.836491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:09.960 [2024-04-24 21:41:32.836501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:09.960 [2024-04-24 21:41:32.836512] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:09.960 [2024-04-24 21:41:32.836531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:09.960 qpair failed and we were unable to recover it. 
00:26:10.220 [2024-04-24 21:41:32.846434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.220 [2024-04-24 21:41:32.846568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.220 [2024-04-24 21:41:32.846590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.220 [2024-04-24 21:41:32.846600] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.220 [2024-04-24 21:41:32.846609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.220 [2024-04-24 21:41:32.846630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.220 qpair failed and we were unable to recover it. 00:26:10.220 [2024-04-24 21:41:32.856473] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.220 [2024-04-24 21:41:32.856596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.220 [2024-04-24 21:41:32.856616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.220 [2024-04-24 21:41:32.856627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.220 [2024-04-24 21:41:32.856635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.220 [2024-04-24 21:41:32.856656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.220 qpair failed and we were unable to recover it. 00:26:10.220 [2024-04-24 21:41:32.866526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.220 [2024-04-24 21:41:32.866655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.220 [2024-04-24 21:41:32.866675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.220 [2024-04-24 21:41:32.866685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.220 [2024-04-24 21:41:32.866693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.220 [2024-04-24 21:41:32.866713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.220 qpair failed and we were unable to recover it. 
00:26:10.220 [2024-04-24 21:41:32.876447] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.220 [2024-04-24 21:41:32.876571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.220 [2024-04-24 21:41:32.876590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.220 [2024-04-24 21:41:32.876600] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.220 [2024-04-24 21:41:32.876609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.220 [2024-04-24 21:41:32.876629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.220 qpair failed and we were unable to recover it. 00:26:10.220 [2024-04-24 21:41:32.886597] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.220 [2024-04-24 21:41:32.886728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.220 [2024-04-24 21:41:32.886747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.220 [2024-04-24 21:41:32.886757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.220 [2024-04-24 21:41:32.886765] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.220 [2024-04-24 21:41:32.886785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.220 qpair failed and we were unable to recover it. 00:26:10.220 [2024-04-24 21:41:32.896600] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.220 [2024-04-24 21:41:32.896724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.220 [2024-04-24 21:41:32.896742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.220 [2024-04-24 21:41:32.896752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.220 [2024-04-24 21:41:32.896761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.220 [2024-04-24 21:41:32.896781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.220 qpair failed and we were unable to recover it. 
00:26:10.220 [2024-04-24 21:41:32.906606] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.220 [2024-04-24 21:41:32.906773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.220 [2024-04-24 21:41:32.906791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.220 [2024-04-24 21:41:32.906802] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.220 [2024-04-24 21:41:32.906812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.220 [2024-04-24 21:41:32.906834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.220 qpair failed and we were unable to recover it. 00:26:10.220 [2024-04-24 21:41:32.916568] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.220 [2024-04-24 21:41:32.916691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.220 [2024-04-24 21:41:32.916710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.221 [2024-04-24 21:41:32.916720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.221 [2024-04-24 21:41:32.916728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.221 [2024-04-24 21:41:32.916747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.221 qpair failed and we were unable to recover it. 00:26:10.221 [2024-04-24 21:41:32.926593] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.221 [2024-04-24 21:41:32.926717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.221 [2024-04-24 21:41:32.926738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.221 [2024-04-24 21:41:32.926752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.221 [2024-04-24 21:41:32.926761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.221 [2024-04-24 21:41:32.926781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.221 qpair failed and we were unable to recover it. 
00:26:10.221 [2024-04-24 21:41:32.936615] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.221 [2024-04-24 21:41:32.936739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.221 [2024-04-24 21:41:32.936757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.221 [2024-04-24 21:41:32.936767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.221 [2024-04-24 21:41:32.936775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.221 [2024-04-24 21:41:32.936794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.221 qpair failed and we were unable to recover it. 00:26:10.221 [2024-04-24 21:41:32.946657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.221 [2024-04-24 21:41:32.946780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.221 [2024-04-24 21:41:32.946799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.221 [2024-04-24 21:41:32.946809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.221 [2024-04-24 21:41:32.946818] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.221 [2024-04-24 21:41:32.946837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.221 qpair failed and we were unable to recover it. 00:26:10.221 [2024-04-24 21:41:32.956740] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.221 [2024-04-24 21:41:32.956864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.221 [2024-04-24 21:41:32.956882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.221 [2024-04-24 21:41:32.956892] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.221 [2024-04-24 21:41:32.956901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.221 [2024-04-24 21:41:32.956920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.221 qpair failed and we were unable to recover it. 
00:26:10.221 [2024-04-24 21:41:32.966704] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.221 [2024-04-24 21:41:32.966824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.221 [2024-04-24 21:41:32.966842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.221 [2024-04-24 21:41:32.966852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.221 [2024-04-24 21:41:32.966861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.221 [2024-04-24 21:41:32.966880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.221 qpair failed and we were unable to recover it. 00:26:10.221 [2024-04-24 21:41:32.976795] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.221 [2024-04-24 21:41:32.976920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.221 [2024-04-24 21:41:32.976938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.221 [2024-04-24 21:41:32.976948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.221 [2024-04-24 21:41:32.976957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.221 [2024-04-24 21:41:32.976977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.221 qpair failed and we were unable to recover it. 00:26:10.221 [2024-04-24 21:41:32.986760] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.221 [2024-04-24 21:41:32.987070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.221 [2024-04-24 21:41:32.987090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.221 [2024-04-24 21:41:32.987099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.221 [2024-04-24 21:41:32.987108] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.221 [2024-04-24 21:41:32.987128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.221 qpair failed and we were unable to recover it. 
00:26:10.221 [2024-04-24 21:41:32.996841] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.221 [2024-04-24 21:41:32.997161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.221 [2024-04-24 21:41:32.997181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.221 [2024-04-24 21:41:32.997191] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.221 [2024-04-24 21:41:32.997200] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.221 [2024-04-24 21:41:32.997220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.221 qpair failed and we were unable to recover it. 00:26:10.221 [2024-04-24 21:41:33.006886] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.221 [2024-04-24 21:41:33.007009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.221 [2024-04-24 21:41:33.007027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.221 [2024-04-24 21:41:33.007037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.221 [2024-04-24 21:41:33.007046] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.221 [2024-04-24 21:41:33.007065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.221 qpair failed and we were unable to recover it. 00:26:10.221 [2024-04-24 21:41:33.016827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.221 [2024-04-24 21:41:33.016952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.221 [2024-04-24 21:41:33.016974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.221 [2024-04-24 21:41:33.016984] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.221 [2024-04-24 21:41:33.016993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.221 [2024-04-24 21:41:33.017012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.221 qpair failed and we were unable to recover it. 
00:26:10.221 [2024-04-24 21:41:33.026878] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.221 [2024-04-24 21:41:33.027010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.221 [2024-04-24 21:41:33.027029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.221 [2024-04-24 21:41:33.027039] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.221 [2024-04-24 21:41:33.027047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.221 [2024-04-24 21:41:33.027067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.221 qpair failed and we were unable to recover it. 00:26:10.221 [2024-04-24 21:41:33.036981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.221 [2024-04-24 21:41:33.037108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.221 [2024-04-24 21:41:33.037127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.221 [2024-04-24 21:41:33.037137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.222 [2024-04-24 21:41:33.037146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.222 [2024-04-24 21:41:33.037165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.222 qpair failed and we were unable to recover it. 00:26:10.222 [2024-04-24 21:41:33.046988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.222 [2024-04-24 21:41:33.047107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.222 [2024-04-24 21:41:33.047125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.222 [2024-04-24 21:41:33.047135] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.222 [2024-04-24 21:41:33.047144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.222 [2024-04-24 21:41:33.047164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.222 qpair failed and we were unable to recover it. 
00:26:10.222 [2024-04-24 21:41:33.057024] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.222 [2024-04-24 21:41:33.057148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.222 [2024-04-24 21:41:33.057166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.222 [2024-04-24 21:41:33.057176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.222 [2024-04-24 21:41:33.057185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.222 [2024-04-24 21:41:33.057207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.222 qpair failed and we were unable to recover it. 00:26:10.222 [2024-04-24 21:41:33.067056] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.222 [2024-04-24 21:41:33.067177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.222 [2024-04-24 21:41:33.067196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.222 [2024-04-24 21:41:33.067206] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.222 [2024-04-24 21:41:33.067215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.222 [2024-04-24 21:41:33.067234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.222 qpair failed and we were unable to recover it. 00:26:10.222 [2024-04-24 21:41:33.077052] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.222 [2024-04-24 21:41:33.077182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.222 [2024-04-24 21:41:33.077201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.222 [2024-04-24 21:41:33.077210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.222 [2024-04-24 21:41:33.077219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.222 [2024-04-24 21:41:33.077238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.222 qpair failed and we were unable to recover it. 
00:26:10.222 [2024-04-24 21:41:33.087106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.222 [2024-04-24 21:41:33.087228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.222 [2024-04-24 21:41:33.087246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.222 [2024-04-24 21:41:33.087256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.222 [2024-04-24 21:41:33.087264] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.222 [2024-04-24 21:41:33.087283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.222 qpair failed and we were unable to recover it. 00:26:10.222 [2024-04-24 21:41:33.097162] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.222 [2024-04-24 21:41:33.097290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.222 [2024-04-24 21:41:33.097309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.222 [2024-04-24 21:41:33.097318] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.222 [2024-04-24 21:41:33.097327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.222 [2024-04-24 21:41:33.097346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.222 qpair failed and we were unable to recover it. 00:26:10.481 [2024-04-24 21:41:33.107171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.481 [2024-04-24 21:41:33.107298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.481 [2024-04-24 21:41:33.107323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.481 [2024-04-24 21:41:33.107334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.481 [2024-04-24 21:41:33.107343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.481 [2024-04-24 21:41:33.107363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.481 qpair failed and we were unable to recover it. 
00:26:10.481 [2024-04-24 21:41:33.117201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.481 [2024-04-24 21:41:33.117324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.481 [2024-04-24 21:41:33.117345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.481 [2024-04-24 21:41:33.117356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.481 [2024-04-24 21:41:33.117364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.481 [2024-04-24 21:41:33.117385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.481 qpair failed and we were unable to recover it. 00:26:10.481 [2024-04-24 21:41:33.127208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.481 [2024-04-24 21:41:33.127333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.481 [2024-04-24 21:41:33.127352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.481 [2024-04-24 21:41:33.127362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.481 [2024-04-24 21:41:33.127371] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.481 [2024-04-24 21:41:33.127390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.481 qpair failed and we were unable to recover it. 00:26:10.481 [2024-04-24 21:41:33.137240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.481 [2024-04-24 21:41:33.137368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.481 [2024-04-24 21:41:33.137387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.481 [2024-04-24 21:41:33.137397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.481 [2024-04-24 21:41:33.137406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.481 [2024-04-24 21:41:33.137426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.481 qpair failed and we were unable to recover it. 
00:26:10.481 [2024-04-24 21:41:33.147213] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.481 [2024-04-24 21:41:33.147338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.481 [2024-04-24 21:41:33.147356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.481 [2024-04-24 21:41:33.147367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.481 [2024-04-24 21:41:33.147378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.481 [2024-04-24 21:41:33.147398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.481 qpair failed and we were unable to recover it. 00:26:10.481 [2024-04-24 21:41:33.157286] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.481 [2024-04-24 21:41:33.157411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.481 [2024-04-24 21:41:33.157430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.481 [2024-04-24 21:41:33.157440] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.481 [2024-04-24 21:41:33.157455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.481 [2024-04-24 21:41:33.157476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.481 qpair failed and we were unable to recover it. 00:26:10.481 [2024-04-24 21:41:33.167308] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.481 [2024-04-24 21:41:33.167432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.481 [2024-04-24 21:41:33.167457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.481 [2024-04-24 21:41:33.167467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.481 [2024-04-24 21:41:33.167476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.481 [2024-04-24 21:41:33.167495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.481 qpair failed and we were unable to recover it. 
00:26:10.481 [2024-04-24 21:41:33.177349] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.481 [2024-04-24 21:41:33.177479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.481 [2024-04-24 21:41:33.177498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.481 [2024-04-24 21:41:33.177508] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.481 [2024-04-24 21:41:33.177517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.481 [2024-04-24 21:41:33.177536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.481 qpair failed and we were unable to recover it. 00:26:10.481 [2024-04-24 21:41:33.187319] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.481 [2024-04-24 21:41:33.187446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.481 [2024-04-24 21:41:33.187470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.481 [2024-04-24 21:41:33.187480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.481 [2024-04-24 21:41:33.187489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.481 [2024-04-24 21:41:33.187509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.481 qpair failed and we were unable to recover it. 00:26:10.481 [2024-04-24 21:41:33.197405] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:10.481 [2024-04-24 21:41:33.197543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:10.481 [2024-04-24 21:41:33.197562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:10.481 [2024-04-24 21:41:33.197572] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:10.481 [2024-04-24 21:41:33.197580] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90 00:26:10.481 [2024-04-24 21:41:33.197600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:10.482 qpair failed and we were unable to recover it. 
00:26:10.482 [2024-04-24 21:41:33.207431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.482 [2024-04-24 21:41:33.207555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.482 [2024-04-24 21:41:33.207574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.482 [2024-04-24 21:41:33.207584] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.482 [2024-04-24 21:41:33.207592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.482 [2024-04-24 21:41:33.207612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.482 qpair failed and we were unable to recover it.
00:26:10.482 [2024-04-24 21:41:33.217391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.482 [2024-04-24 21:41:33.217527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.482 [2024-04-24 21:41:33.217545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.482 [2024-04-24 21:41:33.217555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.482 [2024-04-24 21:41:33.217564] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.482 [2024-04-24 21:41:33.217583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.482 qpair failed and we were unable to recover it.
00:26:10.482 [2024-04-24 21:41:33.227483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.482 [2024-04-24 21:41:33.227608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.482 [2024-04-24 21:41:33.227626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.482 [2024-04-24 21:41:33.227636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.482 [2024-04-24 21:41:33.227644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.482 [2024-04-24 21:41:33.227664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.482 qpair failed and we were unable to recover it.
00:26:10.482 [2024-04-24 21:41:33.237501] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.482 [2024-04-24 21:41:33.237624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.482 [2024-04-24 21:41:33.237642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.482 [2024-04-24 21:41:33.237652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.482 [2024-04-24 21:41:33.237665] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.482 [2024-04-24 21:41:33.237684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.482 qpair failed and we were unable to recover it.
00:26:10.482 [2024-04-24 21:41:33.247537] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.482 [2024-04-24 21:41:33.247661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.482 [2024-04-24 21:41:33.247680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.482 [2024-04-24 21:41:33.247690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.482 [2024-04-24 21:41:33.247698] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.482 [2024-04-24 21:41:33.247718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.482 qpair failed and we were unable to recover it.
00:26:10.482 [2024-04-24 21:41:33.257488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.482 [2024-04-24 21:41:33.257654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.482 [2024-04-24 21:41:33.257673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.482 [2024-04-24 21:41:33.257682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.482 [2024-04-24 21:41:33.257691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.482 [2024-04-24 21:41:33.257711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.482 qpair failed and we were unable to recover it.
00:26:10.482 [2024-04-24 21:41:33.267529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.482 [2024-04-24 21:41:33.267653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.482 [2024-04-24 21:41:33.267671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.482 [2024-04-24 21:41:33.267681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.482 [2024-04-24 21:41:33.267690] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.482 [2024-04-24 21:41:33.267708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.482 qpair failed and we were unable to recover it.
00:26:10.482 [2024-04-24 21:41:33.277555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.482 [2024-04-24 21:41:33.277722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.482 [2024-04-24 21:41:33.277741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.482 [2024-04-24 21:41:33.277751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.482 [2024-04-24 21:41:33.277760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.482 [2024-04-24 21:41:33.277780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.482 qpair failed and we were unable to recover it.
00:26:10.482 [2024-04-24 21:41:33.287636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.482 [2024-04-24 21:41:33.287763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.482 [2024-04-24 21:41:33.287781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.482 [2024-04-24 21:41:33.287791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.482 [2024-04-24 21:41:33.287800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.482 [2024-04-24 21:41:33.287819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.482 qpair failed and we were unable to recover it.
00:26:10.482 [2024-04-24 21:41:33.297672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.482 [2024-04-24 21:41:33.297800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.482 [2024-04-24 21:41:33.297819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.482 [2024-04-24 21:41:33.297829] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.482 [2024-04-24 21:41:33.297837] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.482 [2024-04-24 21:41:33.297856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.482 qpair failed and we were unable to recover it.
00:26:10.482 [2024-04-24 21:41:33.307672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.482 [2024-04-24 21:41:33.307801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.482 [2024-04-24 21:41:33.307819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.482 [2024-04-24 21:41:33.307829] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.482 [2024-04-24 21:41:33.307838] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.482 [2024-04-24 21:41:33.307857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.482 qpair failed and we were unable to recover it.
00:26:10.482 [2024-04-24 21:41:33.317704] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.482 [2024-04-24 21:41:33.317879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.482 [2024-04-24 21:41:33.317897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.482 [2024-04-24 21:41:33.317907] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.482 [2024-04-24 21:41:33.317915] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.482 [2024-04-24 21:41:33.317935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.482 qpair failed and we were unable to recover it.
00:26:10.482 [2024-04-24 21:41:33.327799] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.482 [2024-04-24 21:41:33.327925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.482 [2024-04-24 21:41:33.327943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.482 [2024-04-24 21:41:33.327956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.482 [2024-04-24 21:41:33.327965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.482 [2024-04-24 21:41:33.327984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.482 qpair failed and we were unable to recover it.
00:26:10.482 [2024-04-24 21:41:33.337785] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.482 [2024-04-24 21:41:33.337907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.483 [2024-04-24 21:41:33.337926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.483 [2024-04-24 21:41:33.337935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.483 [2024-04-24 21:41:33.337944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.483 [2024-04-24 21:41:33.337963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.483 qpair failed and we were unable to recover it.
00:26:10.483 [2024-04-24 21:41:33.347829] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.483 [2024-04-24 21:41:33.347951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.483 [2024-04-24 21:41:33.347970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.483 [2024-04-24 21:41:33.347979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.483 [2024-04-24 21:41:33.347988] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.483 [2024-04-24 21:41:33.348007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.483 qpair failed and we were unable to recover it.
00:26:10.483 [2024-04-24 21:41:33.357836] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.483 [2024-04-24 21:41:33.357960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.483 [2024-04-24 21:41:33.357978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.483 [2024-04-24 21:41:33.357988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.483 [2024-04-24 21:41:33.357997] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.483 [2024-04-24 21:41:33.358016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.483 qpair failed and we were unable to recover it.
00:26:10.742 [2024-04-24 21:41:33.367864] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.742 [2024-04-24 21:41:33.367991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.742 [2024-04-24 21:41:33.368012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.742 [2024-04-24 21:41:33.368022] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.742 [2024-04-24 21:41:33.368031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.742 [2024-04-24 21:41:33.368052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.742 qpair failed and we were unable to recover it.
00:26:10.742 [2024-04-24 21:41:33.377897] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.742 [2024-04-24 21:41:33.378035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.742 [2024-04-24 21:41:33.378056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.742 [2024-04-24 21:41:33.378067] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.742 [2024-04-24 21:41:33.378075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.742 [2024-04-24 21:41:33.378095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.742 qpair failed and we were unable to recover it.
00:26:10.742 [2024-04-24 21:41:33.387957] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.742 [2024-04-24 21:41:33.388081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.742 [2024-04-24 21:41:33.388100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.742 [2024-04-24 21:41:33.388110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.742 [2024-04-24 21:41:33.388119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.742 [2024-04-24 21:41:33.388139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.742 qpair failed and we were unable to recover it.
00:26:10.742 [2024-04-24 21:41:33.397935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.742 [2024-04-24 21:41:33.398062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.742 [2024-04-24 21:41:33.398081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.742 [2024-04-24 21:41:33.398090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.742 [2024-04-24 21:41:33.398099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.742 [2024-04-24 21:41:33.398118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.742 qpair failed and we were unable to recover it.
00:26:10.742 [2024-04-24 21:41:33.407963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.742 [2024-04-24 21:41:33.408273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.742 [2024-04-24 21:41:33.408294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.742 [2024-04-24 21:41:33.408303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.742 [2024-04-24 21:41:33.408312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.742 [2024-04-24 21:41:33.408332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.742 qpair failed and we were unable to recover it.
00:26:10.742 [2024-04-24 21:41:33.417996] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.742 [2024-04-24 21:41:33.418120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.742 [2024-04-24 21:41:33.418144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.742 [2024-04-24 21:41:33.418154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.742 [2024-04-24 21:41:33.418163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:10.742 [2024-04-24 21:41:33.418183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:10.742 qpair failed and we were unable to recover it.
00:26:10.742 [2024-04-24 21:41:33.428075] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.742 [2024-04-24 21:41:33.428236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.742 [2024-04-24 21:41:33.428269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.742 [2024-04-24 21:41:33.428284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.742 [2024-04-24 21:41:33.428297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.742 [2024-04-24 21:41:33.428325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.742 qpair failed and we were unable to recover it.
00:26:10.742 [2024-04-24 21:41:33.437995] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.742 [2024-04-24 21:41:33.438119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.742 [2024-04-24 21:41:33.438139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.742 [2024-04-24 21:41:33.438149] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.742 [2024-04-24 21:41:33.438158] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.742 [2024-04-24 21:41:33.438176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.742 qpair failed and we were unable to recover it.
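[Editor's note] Two details are visible at this point in the log. First, "sct 1, sc 130" is Status Code Type 1 (command specific) with Status Code 0x82, which for a Fabrics CONNECT command is the "Invalid Parameters" rejection; that matches the target-side "Unknown controller ID 0x1" line, since a CONNECT that adds an I/O qpair names the admin-created controller by its ID. Second, starting with the 21:41:33.428075 attempt the failing qpair changes (tqpair=0x1b03730, qpair id 3), i.e. the host is now failing on a different I/O qpair. A hedged status-decoding sketch follows; the constant names are taken from the SPDK public headers as best recalled and should be checked against spdk/nvme_spec.h and spdk/nvmf_spec.h.

```c
#include <stdbool.h>

#include "spdk/nvme_spec.h"
#include "spdk/nvmf_spec.h"

/* Illustrative: true when a CONNECT completion carries the
 * "sct 1, sc 130" seen in the log (0x82 = Fabrics CONNECT
 * Invalid Parameters, e.g. an unknown controller ID). */
static bool
connect_rejected_invalid_param(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == SPDK_NVME_SCT_COMMAND_SPECIFIC && /* sct 1 */
	       cpl->status.sc == SPDK_NVMF_FABRIC_SC_INVALID_PARAM; /* sc 0x82 */
}
```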
00:26:10.742 [2024-04-24 21:41:33.448027] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.742 [2024-04-24 21:41:33.448149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.742 [2024-04-24 21:41:33.448169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.742 [2024-04-24 21:41:33.448179] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.742 [2024-04-24 21:41:33.448188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.742 [2024-04-24 21:41:33.448207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.742 qpair failed and we were unable to recover it.
00:26:10.742 [2024-04-24 21:41:33.458117] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.742 [2024-04-24 21:41:33.458256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.742 [2024-04-24 21:41:33.458275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.742 [2024-04-24 21:41:33.458285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.742 [2024-04-24 21:41:33.458294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.742 [2024-04-24 21:41:33.458313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.742 qpair failed and we were unable to recover it.
00:26:10.743 [2024-04-24 21:41:33.468139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.743 [2024-04-24 21:41:33.468264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.743 [2024-04-24 21:41:33.468283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.743 [2024-04-24 21:41:33.468293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.743 [2024-04-24 21:41:33.468302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.743 [2024-04-24 21:41:33.468321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.743 qpair failed and we were unable to recover it.
00:26:10.743 [2024-04-24 21:41:33.478171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.743 [2024-04-24 21:41:33.478297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.743 [2024-04-24 21:41:33.478315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.743 [2024-04-24 21:41:33.478326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.743 [2024-04-24 21:41:33.478335] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.743 [2024-04-24 21:41:33.478353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.743 qpair failed and we were unable to recover it.
00:26:10.743 [2024-04-24 21:41:33.488190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.743 [2024-04-24 21:41:33.488326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.743 [2024-04-24 21:41:33.488346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.743 [2024-04-24 21:41:33.488356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.743 [2024-04-24 21:41:33.488365] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.743 [2024-04-24 21:41:33.488383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.743 qpair failed and we were unable to recover it.
00:26:10.743 [2024-04-24 21:41:33.498214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.743 [2024-04-24 21:41:33.498354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.743 [2024-04-24 21:41:33.498373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.743 [2024-04-24 21:41:33.498383] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.743 [2024-04-24 21:41:33.498392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.743 [2024-04-24 21:41:33.498410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.743 qpair failed and we were unable to recover it.
00:26:10.743 [2024-04-24 21:41:33.508292] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.743 [2024-04-24 21:41:33.508463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.743 [2024-04-24 21:41:33.508486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.743 [2024-04-24 21:41:33.508496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.743 [2024-04-24 21:41:33.508504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.743 [2024-04-24 21:41:33.508523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.743 qpair failed and we were unable to recover it.
00:26:10.743 [2024-04-24 21:41:33.518269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.743 [2024-04-24 21:41:33.518399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.743 [2024-04-24 21:41:33.518418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.743 [2024-04-24 21:41:33.518429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.743 [2024-04-24 21:41:33.518437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.743 [2024-04-24 21:41:33.518460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.743 qpair failed and we were unable to recover it.
00:26:10.743 [2024-04-24 21:41:33.528298] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.743 [2024-04-24 21:41:33.528420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.743 [2024-04-24 21:41:33.528439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.743 [2024-04-24 21:41:33.528454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.743 [2024-04-24 21:41:33.528463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.743 [2024-04-24 21:41:33.528482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.743 qpair failed and we were unable to recover it.
00:26:10.743 [2024-04-24 21:41:33.538251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.743 [2024-04-24 21:41:33.538384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.743 [2024-04-24 21:41:33.538403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.743 [2024-04-24 21:41:33.538413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.743 [2024-04-24 21:41:33.538421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.743 [2024-04-24 21:41:33.538439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.743 qpair failed and we were unable to recover it.
00:26:10.743 [2024-04-24 21:41:33.548367] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.743 [2024-04-24 21:41:33.548508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.743 [2024-04-24 21:41:33.548527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.743 [2024-04-24 21:41:33.548537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.743 [2024-04-24 21:41:33.548546] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.743 [2024-04-24 21:41:33.548567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.743 qpair failed and we were unable to recover it.
00:26:10.743 [2024-04-24 21:41:33.558394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.743 [2024-04-24 21:41:33.558521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.743 [2024-04-24 21:41:33.558542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.743 [2024-04-24 21:41:33.558554] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.743 [2024-04-24 21:41:33.558562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.743 [2024-04-24 21:41:33.558581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.743 qpair failed and we were unable to recover it.
00:26:10.743 [2024-04-24 21:41:33.568341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.743 [2024-04-24 21:41:33.568480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.743 [2024-04-24 21:41:33.568499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.743 [2024-04-24 21:41:33.568510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.743 [2024-04-24 21:41:33.568518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.743 [2024-04-24 21:41:33.568537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.743 qpair failed and we were unable to recover it.
00:26:10.743 [2024-04-24 21:41:33.578455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.743 [2024-04-24 21:41:33.578581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.743 [2024-04-24 21:41:33.578601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.743 [2024-04-24 21:41:33.578611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.743 [2024-04-24 21:41:33.578619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.743 [2024-04-24 21:41:33.578638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.743 qpair failed and we were unable to recover it.
00:26:10.743 [2024-04-24 21:41:33.588485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.743 [2024-04-24 21:41:33.588803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.743 [2024-04-24 21:41:33.588823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.743 [2024-04-24 21:41:33.588833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.743 [2024-04-24 21:41:33.588842] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.743 [2024-04-24 21:41:33.588860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.743 qpair failed and we were unable to recover it.
00:26:10.743 [2024-04-24 21:41:33.598502] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.743 [2024-04-24 21:41:33.598632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.743 [2024-04-24 21:41:33.598654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.744 [2024-04-24 21:41:33.598664] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.744 [2024-04-24 21:41:33.598672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.744 [2024-04-24 21:41:33.598691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.744 qpair failed and we were unable to recover it.
00:26:10.744 [2024-04-24 21:41:33.608566] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.744 [2024-04-24 21:41:33.608692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.744 [2024-04-24 21:41:33.608711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.744 [2024-04-24 21:41:33.608721] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.744 [2024-04-24 21:41:33.608730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.744 [2024-04-24 21:41:33.608748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.744 qpair failed and we were unable to recover it.
00:26:10.744 [2024-04-24 21:41:33.618547] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:10.744 [2024-04-24 21:41:33.618672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:10.744 [2024-04-24 21:41:33.618690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:10.744 [2024-04-24 21:41:33.618700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:10.744 [2024-04-24 21:41:33.618709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:10.744 [2024-04-24 21:41:33.618727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:10.744 qpair failed and we were unable to recover it.
00:26:11.004 [2024-04-24 21:41:33.628513] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.004 [2024-04-24 21:41:33.628638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.004 [2024-04-24 21:41:33.628657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.004 [2024-04-24 21:41:33.628667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.004 [2024-04-24 21:41:33.628676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.004 [2024-04-24 21:41:33.628694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.004 qpair failed and we were unable to recover it.
00:26:11.004 [2024-04-24 21:41:33.638606] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.004 [2024-04-24 21:41:33.638750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.004 [2024-04-24 21:41:33.638769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.004 [2024-04-24 21:41:33.638778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.004 [2024-04-24 21:41:33.638787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.004 [2024-04-24 21:41:33.638808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.004 qpair failed and we were unable to recover it.
00:26:11.004 [2024-04-24 21:41:33.648642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.004 [2024-04-24 21:41:33.648770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.004 [2024-04-24 21:41:33.648789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.004 [2024-04-24 21:41:33.648799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.004 [2024-04-24 21:41:33.648807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.004 [2024-04-24 21:41:33.648826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.004 qpair failed and we were unable to recover it.
00:26:11.004 [2024-04-24 21:41:33.658668] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.004 [2024-04-24 21:41:33.658796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.004 [2024-04-24 21:41:33.658815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.004 [2024-04-24 21:41:33.658825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.004 [2024-04-24 21:41:33.658833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.004 [2024-04-24 21:41:33.658851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.004 qpair failed and we were unable to recover it.
00:26:11.004 [2024-04-24 21:41:33.668698] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.004 [2024-04-24 21:41:33.668825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.004 [2024-04-24 21:41:33.668844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.004 [2024-04-24 21:41:33.668853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.004 [2024-04-24 21:41:33.668862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.004 [2024-04-24 21:41:33.668880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.004 qpair failed and we were unable to recover it.
00:26:11.004 [2024-04-24 21:41:33.678773] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.004 [2024-04-24 21:41:33.678907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.004 [2024-04-24 21:41:33.678926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.004 [2024-04-24 21:41:33.678936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.004 [2024-04-24 21:41:33.678944] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.004 [2024-04-24 21:41:33.678963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.004 qpair failed and we were unable to recover it.
00:26:11.004 [2024-04-24 21:41:33.688761] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.004 [2024-04-24 21:41:33.688886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.004 [2024-04-24 21:41:33.688909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.004 [2024-04-24 21:41:33.688919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.004 [2024-04-24 21:41:33.688927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.004 [2024-04-24 21:41:33.688946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.004 qpair failed and we were unable to recover it.
00:26:11.004 [2024-04-24 21:41:33.698799] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.004 [2024-04-24 21:41:33.698918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.004 [2024-04-24 21:41:33.698937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.004 [2024-04-24 21:41:33.698947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.004 [2024-04-24 21:41:33.698955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.004 [2024-04-24 21:41:33.698974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.004 qpair failed and we were unable to recover it.
00:26:11.004 [2024-04-24 21:41:33.708809] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.004 [2024-04-24 21:41:33.708936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.004 [2024-04-24 21:41:33.708954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.004 [2024-04-24 21:41:33.708964] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.004 [2024-04-24 21:41:33.708973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.004 [2024-04-24 21:41:33.708991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.004 qpair failed and we were unable to recover it.
00:26:11.004 [2024-04-24 21:41:33.718889] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.004 [2024-04-24 21:41:33.719016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.004 [2024-04-24 21:41:33.719035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.004 [2024-04-24 21:41:33.719045] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.004 [2024-04-24 21:41:33.719053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.004 [2024-04-24 21:41:33.719071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.004 qpair failed and we were unable to recover it.
00:26:11.004 [2024-04-24 21:41:33.728847] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.004 [2024-04-24 21:41:33.728971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.004 [2024-04-24 21:41:33.728990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.004 [2024-04-24 21:41:33.728999] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.004 [2024-04-24 21:41:33.729011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.004 [2024-04-24 21:41:33.729030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.004 qpair failed and we were unable to recover it.
00:26:11.004 [2024-04-24 21:41:33.738811] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.004 [2024-04-24 21:41:33.739113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.004 [2024-04-24 21:41:33.739134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.004 [2024-04-24 21:41:33.739144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.004 [2024-04-24 21:41:33.739153] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.004 [2024-04-24 21:41:33.739171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.004 qpair failed and we were unable to recover it.
00:26:11.004 [2024-04-24 21:41:33.748850] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.004 [2024-04-24 21:41:33.749015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.004 [2024-04-24 21:41:33.749033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.004 [2024-04-24 21:41:33.749043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.005 [2024-04-24 21:41:33.749052] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.005 [2024-04-24 21:41:33.749071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.005 qpair failed and we were unable to recover it.
00:26:11.005 [2024-04-24 21:41:33.758921] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.005 [2024-04-24 21:41:33.759057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.005 [2024-04-24 21:41:33.759077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.005 [2024-04-24 21:41:33.759087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.005 [2024-04-24 21:41:33.759095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.005 [2024-04-24 21:41:33.759114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.005 qpair failed and we were unable to recover it.
00:26:11.005 [2024-04-24 21:41:33.768962] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.005 [2024-04-24 21:41:33.769086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.005 [2024-04-24 21:41:33.769104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.005 [2024-04-24 21:41:33.769114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.005 [2024-04-24 21:41:33.769123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.005 [2024-04-24 21:41:33.769141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.005 qpair failed and we were unable to recover it.
00:26:11.005 [2024-04-24 21:41:33.779003] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.005 [2024-04-24 21:41:33.779127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.005 [2024-04-24 21:41:33.779147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.005 [2024-04-24 21:41:33.779156] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.005 [2024-04-24 21:41:33.779165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.005 [2024-04-24 21:41:33.779184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.005 qpair failed and we were unable to recover it.
00:26:11.005 [2024-04-24 21:41:33.789040] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.005 [2024-04-24 21:41:33.789162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.005 [2024-04-24 21:41:33.789181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.005 [2024-04-24 21:41:33.789191] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.005 [2024-04-24 21:41:33.789199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.005 [2024-04-24 21:41:33.789218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.005 qpair failed and we were unable to recover it.
00:26:11.005 [2024-04-24 21:41:33.798990] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.005 [2024-04-24 21:41:33.799118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.005 [2024-04-24 21:41:33.799137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.005 [2024-04-24 21:41:33.799147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.005 [2024-04-24 21:41:33.799155] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.005 [2024-04-24 21:41:33.799173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.005 qpair failed and we were unable to recover it.
00:26:11.005 [2024-04-24 21:41:33.809029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.005 [2024-04-24 21:41:33.809152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.005 [2024-04-24 21:41:33.809170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.005 [2024-04-24 21:41:33.809181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.005 [2024-04-24 21:41:33.809189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.005 [2024-04-24 21:41:33.809208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.005 qpair failed and we were unable to recover it.
00:26:11.005 [2024-04-24 21:41:33.819114] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.005 [2024-04-24 21:41:33.819237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.005 [2024-04-24 21:41:33.819258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.005 [2024-04-24 21:41:33.819268] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.005 [2024-04-24 21:41:33.819281] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.005 [2024-04-24 21:41:33.819300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.005 qpair failed and we were unable to recover it.
00:26:11.005 [2024-04-24 21:41:33.829084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.005 [2024-04-24 21:41:33.829401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.005 [2024-04-24 21:41:33.829421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.005 [2024-04-24 21:41:33.829431] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.005 [2024-04-24 21:41:33.829439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.005 [2024-04-24 21:41:33.829463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.005 qpair failed and we were unable to recover it.
00:26:11.005 [2024-04-24 21:41:33.839089] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.005 [2024-04-24 21:41:33.839257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.005 [2024-04-24 21:41:33.839276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.005 [2024-04-24 21:41:33.839285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.005 [2024-04-24 21:41:33.839294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.005 [2024-04-24 21:41:33.839313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.005 qpair failed and we were unable to recover it.
00:26:11.005 [2024-04-24 21:41:33.849117] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.005 [2024-04-24 21:41:33.849246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.005 [2024-04-24 21:41:33.849265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.005 [2024-04-24 21:41:33.849275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.005 [2024-04-24 21:41:33.849284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.005 [2024-04-24 21:41:33.849302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.005 qpair failed and we were unable to recover it.
00:26:11.005 [2024-04-24 21:41:33.859225] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.005 [2024-04-24 21:41:33.859346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.005 [2024-04-24 21:41:33.859366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.005 [2024-04-24 21:41:33.859375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.005 [2024-04-24 21:41:33.859384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.005 [2024-04-24 21:41:33.859402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.005 qpair failed and we were unable to recover it.
00:26:11.005 [2024-04-24 21:41:33.869193] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.005 [2024-04-24 21:41:33.869322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.005 [2024-04-24 21:41:33.869341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.005 [2024-04-24 21:41:33.869351] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.005 [2024-04-24 21:41:33.869359] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.005 [2024-04-24 21:41:33.869378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.005 qpair failed and we were unable to recover it.
00:26:11.005 [2024-04-24 21:41:33.879265] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.005 [2024-04-24 21:41:33.879433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.005 [2024-04-24 21:41:33.879456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.005 [2024-04-24 21:41:33.879467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.005 [2024-04-24 21:41:33.879476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.005 [2024-04-24 21:41:33.879495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.005 qpair failed and we were unable to recover it.
00:26:11.005 [2024-04-24 21:41:33.889317] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.006 [2024-04-24 21:41:33.889438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.006 [2024-04-24 21:41:33.889461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.006 [2024-04-24 21:41:33.889472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.006 [2024-04-24 21:41:33.889481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.006 [2024-04-24 21:41:33.889500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.006 qpair failed and we were unable to recover it.
00:26:11.265 [2024-04-24 21:41:33.899348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.265 [2024-04-24 21:41:33.899500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.265 [2024-04-24 21:41:33.899518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.265 [2024-04-24 21:41:33.899528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.265 [2024-04-24 21:41:33.899537] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.265 [2024-04-24 21:41:33.899555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.265 qpair failed and we were unable to recover it.
00:26:11.265 [2024-04-24 21:41:33.909342] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.265 [2024-04-24 21:41:33.909503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.265 [2024-04-24 21:41:33.909522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.265 [2024-04-24 21:41:33.909532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.265 [2024-04-24 21:41:33.909545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.265 [2024-04-24 21:41:33.909563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.265 qpair failed and we were unable to recover it.
00:26:11.265 [2024-04-24 21:41:33.919401] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.265 [2024-04-24 21:41:33.919716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.265 [2024-04-24 21:41:33.919734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.265 [2024-04-24 21:41:33.919744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.265 [2024-04-24 21:41:33.919753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.265 [2024-04-24 21:41:33.919771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.265 qpair failed and we were unable to recover it.
00:26:11.265 [2024-04-24 21:41:33.929432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.265 [2024-04-24 21:41:33.929566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.265 [2024-04-24 21:41:33.929585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.265 [2024-04-24 21:41:33.929595] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.265 [2024-04-24 21:41:33.929603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.265 [2024-04-24 21:41:33.929622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.265 qpair failed and we were unable to recover it.
00:26:11.265 [2024-04-24 21:41:33.939468] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.265 [2024-04-24 21:41:33.939599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.265 [2024-04-24 21:41:33.939619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.265 [2024-04-24 21:41:33.939629] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.265 [2024-04-24 21:41:33.939638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.265 [2024-04-24 21:41:33.939656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.265 qpair failed and we were unable to recover it.
00:26:11.265 [2024-04-24 21:41:33.949503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.265 [2024-04-24 21:41:33.949628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.265 [2024-04-24 21:41:33.949647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.265 [2024-04-24 21:41:33.949658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.265 [2024-04-24 21:41:33.949667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.265 [2024-04-24 21:41:33.949685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.265 qpair failed and we were unable to recover it.
00:26:11.265 [2024-04-24 21:41:33.959485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.265 [2024-04-24 21:41:33.959616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.265 [2024-04-24 21:41:33.959634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.265 [2024-04-24 21:41:33.959645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.265 [2024-04-24 21:41:33.959653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.265 [2024-04-24 21:41:33.959672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.265 qpair failed and we were unable to recover it.
00:26:11.265 [2024-04-24 21:41:33.969520] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.265 [2024-04-24 21:41:33.969654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.265 [2024-04-24 21:41:33.969673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.265 [2024-04-24 21:41:33.969683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.265 [2024-04-24 21:41:33.969691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.265 [2024-04-24 21:41:33.969710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.265 qpair failed and we were unable to recover it.
00:26:11.265 [2024-04-24 21:41:33.979565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.265 [2024-04-24 21:41:33.979689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.265 [2024-04-24 21:41:33.979708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.265 [2024-04-24 21:41:33.979718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.265 [2024-04-24 21:41:33.979727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.265 [2024-04-24 21:41:33.979746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.265 qpair failed and we were unable to recover it.
00:26:11.265 [2024-04-24 21:41:33.989600] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.265 [2024-04-24 21:41:33.989730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.265 [2024-04-24 21:41:33.989749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.266 [2024-04-24 21:41:33.989759] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.266 [2024-04-24 21:41:33.989767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.266 [2024-04-24 21:41:33.989786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.266 qpair failed and we were unable to recover it.
00:26:11.266 [2024-04-24 21:41:33.999627] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.266 [2024-04-24 21:41:33.999755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.266 [2024-04-24 21:41:33.999774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.266 [2024-04-24 21:41:33.999787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.266 [2024-04-24 21:41:33.999796] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.266 [2024-04-24 21:41:33.999814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.266 qpair failed and we were unable to recover it.
00:26:11.266 [2024-04-24 21:41:34.009671] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.266 [2024-04-24 21:41:34.009793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.266 [2024-04-24 21:41:34.009812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.266 [2024-04-24 21:41:34.009822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.266 [2024-04-24 21:41:34.009831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.266 [2024-04-24 21:41:34.009849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.266 qpair failed and we were unable to recover it.
00:26:11.266 [2024-04-24 21:41:34.019699] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.266 [2024-04-24 21:41:34.019824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.266 [2024-04-24 21:41:34.019844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.266 [2024-04-24 21:41:34.019853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.266 [2024-04-24 21:41:34.019862] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.266 [2024-04-24 21:41:34.019881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.266 qpair failed and we were unable to recover it.
00:26:11.266 [2024-04-24 21:41:34.029718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.266 [2024-04-24 21:41:34.029843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.266 [2024-04-24 21:41:34.029862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.266 [2024-04-24 21:41:34.029871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.266 [2024-04-24 21:41:34.029880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.266 [2024-04-24 21:41:34.029898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.266 qpair failed and we were unable to recover it.
00:26:11.266 [2024-04-24 21:41:34.039766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.266 [2024-04-24 21:41:34.039892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.266 [2024-04-24 21:41:34.039911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.266 [2024-04-24 21:41:34.039922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.266 [2024-04-24 21:41:34.039930] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.266 [2024-04-24 21:41:34.039948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.266 qpair failed and we were unable to recover it.
00:26:11.266 [2024-04-24 21:41:34.049778] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.266 [2024-04-24 21:41:34.049902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.266 [2024-04-24 21:41:34.049920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.266 [2024-04-24 21:41:34.049930] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.266 [2024-04-24 21:41:34.049939] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.266 [2024-04-24 21:41:34.049957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.266 qpair failed and we were unable to recover it.
00:26:11.266 [2024-04-24 21:41:34.059803] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.266 [2024-04-24 21:41:34.059924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.266 [2024-04-24 21:41:34.059943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.266 [2024-04-24 21:41:34.059954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.266 [2024-04-24 21:41:34.059962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.266 [2024-04-24 21:41:34.059981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.266 qpair failed and we were unable to recover it.
00:26:11.266 [2024-04-24 21:41:34.069828] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.266 [2024-04-24 21:41:34.069980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.266 [2024-04-24 21:41:34.069999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.266 [2024-04-24 21:41:34.070008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.266 [2024-04-24 21:41:34.070017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.266 [2024-04-24 21:41:34.070035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.266 qpair failed and we were unable to recover it.
00:26:11.266 [2024-04-24 21:41:34.079855] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.266 [2024-04-24 21:41:34.079981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.266 [2024-04-24 21:41:34.080001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.266 [2024-04-24 21:41:34.080011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.266 [2024-04-24 21:41:34.080019] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.266 [2024-04-24 21:41:34.080037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.266 qpair failed and we were unable to recover it.
00:26:11.266 [2024-04-24 21:41:34.089904] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.266 [2024-04-24 21:41:34.090024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.266 [2024-04-24 21:41:34.090042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.266 [2024-04-24 21:41:34.090055] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.266 [2024-04-24 21:41:34.090064] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.266 [2024-04-24 21:41:34.090083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.266 qpair failed and we were unable to recover it.
00:26:11.266 [2024-04-24 21:41:34.099884] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.266 [2024-04-24 21:41:34.100012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.266 [2024-04-24 21:41:34.100032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.266 [2024-04-24 21:41:34.100041] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.266 [2024-04-24 21:41:34.100050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.266 [2024-04-24 21:41:34.100068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.266 qpair failed and we were unable to recover it.
00:26:11.266 [2024-04-24 21:41:34.109982] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.266 [2024-04-24 21:41:34.110105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.266 [2024-04-24 21:41:34.110124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.266 [2024-04-24 21:41:34.110134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.266 [2024-04-24 21:41:34.110143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.266 [2024-04-24 21:41:34.110161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.266 qpair failed and we were unable to recover it.
00:26:11.266 [2024-04-24 21:41:34.119973] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.266 [2024-04-24 21:41:34.120105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.266 [2024-04-24 21:41:34.120123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.267 [2024-04-24 21:41:34.120133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.267 [2024-04-24 21:41:34.120142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.267 [2024-04-24 21:41:34.120160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.267 qpair failed and we were unable to recover it.
00:26:11.267 [2024-04-24 21:41:34.130001] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.267 [2024-04-24 21:41:34.130134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.267 [2024-04-24 21:41:34.130152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.267 [2024-04-24 21:41:34.130162] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.267 [2024-04-24 21:41:34.130170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.267 [2024-04-24 21:41:34.130188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.267 qpair failed and we were unable to recover it.
00:26:11.267 [2024-04-24 21:41:34.140028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.267 [2024-04-24 21:41:34.140153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.267 [2024-04-24 21:41:34.140172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.267 [2024-04-24 21:41:34.140182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.267 [2024-04-24 21:41:34.140190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.267 [2024-04-24 21:41:34.140209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.267 qpair failed and we were unable to recover it.
00:26:11.267 [2024-04-24 21:41:34.150182] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.267 [2024-04-24 21:41:34.150308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.267 [2024-04-24 21:41:34.150327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.267 [2024-04-24 21:41:34.150337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.267 [2024-04-24 21:41:34.150345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.267 [2024-04-24 21:41:34.150364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.267 qpair failed and we were unable to recover it.
00:26:11.526 [2024-04-24 21:41:34.160084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.526 [2024-04-24 21:41:34.160211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.526 [2024-04-24 21:41:34.160230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.526 [2024-04-24 21:41:34.160240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.526 [2024-04-24 21:41:34.160249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.526 [2024-04-24 21:41:34.160267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.526 qpair failed and we were unable to recover it.
00:26:11.526 [2024-04-24 21:41:34.170121] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.526 [2024-04-24 21:41:34.170258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.526 [2024-04-24 21:41:34.170277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.526 [2024-04-24 21:41:34.170287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.526 [2024-04-24 21:41:34.170296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.526 [2024-04-24 21:41:34.170314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.526 qpair failed and we were unable to recover it.
00:26:11.526 [2024-04-24 21:41:34.180152] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.526 [2024-04-24 21:41:34.180274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.526 [2024-04-24 21:41:34.180296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.526 [2024-04-24 21:41:34.180306] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.526 [2024-04-24 21:41:34.180314] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.526 [2024-04-24 21:41:34.180333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.526 qpair failed and we were unable to recover it.
00:26:11.526 [2024-04-24 21:41:34.190188] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.526 [2024-04-24 21:41:34.190308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.526 [2024-04-24 21:41:34.190327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.526 [2024-04-24 21:41:34.190337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.526 [2024-04-24 21:41:34.190346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.526 [2024-04-24 21:41:34.190364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.526 qpair failed and we were unable to recover it.
00:26:11.526 [2024-04-24 21:41:34.200213] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.526 [2024-04-24 21:41:34.200339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.526 [2024-04-24 21:41:34.200358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.526 [2024-04-24 21:41:34.200368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.526 [2024-04-24 21:41:34.200377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.527 [2024-04-24 21:41:34.200395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.527 qpair failed and we were unable to recover it.
00:26:11.527 [2024-04-24 21:41:34.210197] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.527 [2024-04-24 21:41:34.210323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.527 [2024-04-24 21:41:34.210342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.527 [2024-04-24 21:41:34.210352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.527 [2024-04-24 21:41:34.210360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.527 [2024-04-24 21:41:34.210379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.527 qpair failed and we were unable to recover it.
00:26:11.527 [2024-04-24 21:41:34.220263] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.527 [2024-04-24 21:41:34.220388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.527 [2024-04-24 21:41:34.220407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.527 [2024-04-24 21:41:34.220417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.527 [2024-04-24 21:41:34.220425] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.527 [2024-04-24 21:41:34.220444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.527 qpair failed and we were unable to recover it.
00:26:11.527 [2024-04-24 21:41:34.230310] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.527 [2024-04-24 21:41:34.230433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.527 [2024-04-24 21:41:34.230457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.527 [2024-04-24 21:41:34.230467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.527 [2024-04-24 21:41:34.230476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.527 [2024-04-24 21:41:34.230494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.527 qpair failed and we were unable to recover it.
00:26:11.527 [2024-04-24 21:41:34.240328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.527 [2024-04-24 21:41:34.240472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.527 [2024-04-24 21:41:34.240491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.527 [2024-04-24 21:41:34.240500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.527 [2024-04-24 21:41:34.240509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.527 [2024-04-24 21:41:34.240527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.527 qpair failed and we were unable to recover it.
00:26:11.527 [2024-04-24 21:41:34.250353] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.527 [2024-04-24 21:41:34.250489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.527 [2024-04-24 21:41:34.250507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.527 [2024-04-24 21:41:34.250518] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.527 [2024-04-24 21:41:34.250527] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.527 [2024-04-24 21:41:34.250544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.527 qpair failed and we were unable to recover it.
00:26:11.527 [2024-04-24 21:41:34.260353] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.527 [2024-04-24 21:41:34.260489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.527 [2024-04-24 21:41:34.260508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.527 [2024-04-24 21:41:34.260518] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.527 [2024-04-24 21:41:34.260526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.527 [2024-04-24 21:41:34.260544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.527 qpair failed and we were unable to recover it.
00:26:11.527 [2024-04-24 21:41:34.270411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.527 [2024-04-24 21:41:34.270543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.527 [2024-04-24 21:41:34.270564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.527 [2024-04-24 21:41:34.270574] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.527 [2024-04-24 21:41:34.270583] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.527 [2024-04-24 21:41:34.270601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.527 qpair failed and we were unable to recover it.
00:26:11.527 [2024-04-24 21:41:34.280367] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.527 [2024-04-24 21:41:34.280523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.527 [2024-04-24 21:41:34.280541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.527 [2024-04-24 21:41:34.280551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.527 [2024-04-24 21:41:34.280560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.527 [2024-04-24 21:41:34.280578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.527 qpair failed and we were unable to recover it.
00:26:11.527 [2024-04-24 21:41:34.290459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.527 [2024-04-24 21:41:34.290586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.527 [2024-04-24 21:41:34.290605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.527 [2024-04-24 21:41:34.290614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.527 [2024-04-24 21:41:34.290623] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.527 [2024-04-24 21:41:34.290641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.527 qpair failed and we were unable to recover it.
00:26:11.527 [2024-04-24 21:41:34.300484] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.527 [2024-04-24 21:41:34.300603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.527 [2024-04-24 21:41:34.300621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.527 [2024-04-24 21:41:34.300631] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.527 [2024-04-24 21:41:34.300640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.527 [2024-04-24 21:41:34.300657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.527 qpair failed and we were unable to recover it.
00:26:11.527 [2024-04-24 21:41:34.310493] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.527 [2024-04-24 21:41:34.310651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.527 [2024-04-24 21:41:34.310670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.527 [2024-04-24 21:41:34.310680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.527 [2024-04-24 21:41:34.310688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.527 [2024-04-24 21:41:34.310710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.527 qpair failed and we were unable to recover it.
00:26:11.527 [2024-04-24 21:41:34.320495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.527 [2024-04-24 21:41:34.320619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.527 [2024-04-24 21:41:34.320637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.527 [2024-04-24 21:41:34.320648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.527 [2024-04-24 21:41:34.320657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.527 [2024-04-24 21:41:34.320675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.527 qpair failed and we were unable to recover it.
00:26:11.527 [2024-04-24 21:41:34.330599] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.527 [2024-04-24 21:41:34.330723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.527 [2024-04-24 21:41:34.330741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.527 [2024-04-24 21:41:34.330752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.527 [2024-04-24 21:41:34.330761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.528 [2024-04-24 21:41:34.330779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.528 qpair failed and we were unable to recover it.
00:26:11.528 [2024-04-24 21:41:34.340559] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:11.528 [2024-04-24 21:41:34.340686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:11.528 [2024-04-24 21:41:34.340705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:11.528 [2024-04-24 21:41:34.340715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:11.528 [2024-04-24 21:41:34.340724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:11.528 [2024-04-24 21:41:34.340742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:11.528 qpair failed and we were unable to recover it.
00:26:11.528 [2024-04-24 21:41:34.350606] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.528 [2024-04-24 21:41:34.350758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.528 [2024-04-24 21:41:34.350778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.528 [2024-04-24 21:41:34.350788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.528 [2024-04-24 21:41:34.350797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.528 [2024-04-24 21:41:34.350816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.528 qpair failed and we were unable to recover it. 00:26:11.528 [2024-04-24 21:41:34.360572] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.528 [2024-04-24 21:41:34.360704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.528 [2024-04-24 21:41:34.360728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.528 [2024-04-24 21:41:34.360738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.528 [2024-04-24 21:41:34.360746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.528 [2024-04-24 21:41:34.360765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.528 qpair failed and we were unable to recover it. 00:26:11.528 [2024-04-24 21:41:34.370660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.528 [2024-04-24 21:41:34.370792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.528 [2024-04-24 21:41:34.370811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.528 [2024-04-24 21:41:34.370821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.528 [2024-04-24 21:41:34.370830] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.528 [2024-04-24 21:41:34.370848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.528 qpair failed and we were unable to recover it. 
00:26:11.528 [2024-04-24 21:41:34.380710] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.528 [2024-04-24 21:41:34.380835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.528 [2024-04-24 21:41:34.380854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.528 [2024-04-24 21:41:34.380864] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.528 [2024-04-24 21:41:34.380873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.528 [2024-04-24 21:41:34.380892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.528 qpair failed and we were unable to recover it. 00:26:11.528 [2024-04-24 21:41:34.390723] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.528 [2024-04-24 21:41:34.390849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.528 [2024-04-24 21:41:34.390870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.528 [2024-04-24 21:41:34.390880] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.528 [2024-04-24 21:41:34.390890] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.528 [2024-04-24 21:41:34.390908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.528 qpair failed and we were unable to recover it. 00:26:11.528 [2024-04-24 21:41:34.400771] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.528 [2024-04-24 21:41:34.400895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.528 [2024-04-24 21:41:34.400913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.528 [2024-04-24 21:41:34.400924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.528 [2024-04-24 21:41:34.400932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.528 [2024-04-24 21:41:34.400953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.528 qpair failed and we were unable to recover it. 
00:26:11.528 [2024-04-24 21:41:34.410785] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.528 [2024-04-24 21:41:34.410909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.528 [2024-04-24 21:41:34.410928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.528 [2024-04-24 21:41:34.410939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.528 [2024-04-24 21:41:34.410948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.528 [2024-04-24 21:41:34.410966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.528 qpair failed and we were unable to recover it. 00:26:11.788 [2024-04-24 21:41:34.420762] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.788 [2024-04-24 21:41:34.420883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.788 [2024-04-24 21:41:34.420903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.788 [2024-04-24 21:41:34.420913] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.788 [2024-04-24 21:41:34.420922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.788 [2024-04-24 21:41:34.420941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.788 qpair failed and we were unable to recover it. 00:26:11.788 [2024-04-24 21:41:34.430792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.788 [2024-04-24 21:41:34.430916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.788 [2024-04-24 21:41:34.430935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.788 [2024-04-24 21:41:34.430945] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.788 [2024-04-24 21:41:34.430954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.788 [2024-04-24 21:41:34.430972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.788 qpair failed and we were unable to recover it. 
00:26:11.788 [2024-04-24 21:41:34.440878] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.788 [2024-04-24 21:41:34.441022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.788 [2024-04-24 21:41:34.441040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.788 [2024-04-24 21:41:34.441050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.788 [2024-04-24 21:41:34.441059] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.788 [2024-04-24 21:41:34.441077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.788 qpair failed and we were unable to recover it. 00:26:11.788 [2024-04-24 21:41:34.450906] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.788 [2024-04-24 21:41:34.451026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.788 [2024-04-24 21:41:34.451048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.788 [2024-04-24 21:41:34.451058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.788 [2024-04-24 21:41:34.451066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.788 [2024-04-24 21:41:34.451085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.788 qpair failed and we were unable to recover it. 00:26:11.788 [2024-04-24 21:41:34.460927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.788 [2024-04-24 21:41:34.461051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.788 [2024-04-24 21:41:34.461070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.788 [2024-04-24 21:41:34.461080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.788 [2024-04-24 21:41:34.461089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.788 [2024-04-24 21:41:34.461107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.788 qpair failed and we were unable to recover it. 
00:26:11.788 [2024-04-24 21:41:34.470933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.788 [2024-04-24 21:41:34.471059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.788 [2024-04-24 21:41:34.471077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.788 [2024-04-24 21:41:34.471087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.788 [2024-04-24 21:41:34.471096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.788 [2024-04-24 21:41:34.471114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.788 qpair failed and we were unable to recover it. 00:26:11.788 [2024-04-24 21:41:34.480918] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.788 [2024-04-24 21:41:34.481046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.788 [2024-04-24 21:41:34.481066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.788 [2024-04-24 21:41:34.481077] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.788 [2024-04-24 21:41:34.481086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.788 [2024-04-24 21:41:34.481105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.788 qpair failed and we were unable to recover it. 00:26:11.788 [2024-04-24 21:41:34.490943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.788 [2024-04-24 21:41:34.491070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.788 [2024-04-24 21:41:34.491089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.788 [2024-04-24 21:41:34.491099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.788 [2024-04-24 21:41:34.491111] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.788 [2024-04-24 21:41:34.491130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.788 qpair failed and we were unable to recover it. 
00:26:11.788 [2024-04-24 21:41:34.500966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.788 [2024-04-24 21:41:34.501099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.788 [2024-04-24 21:41:34.501119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.788 [2024-04-24 21:41:34.501128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.788 [2024-04-24 21:41:34.501137] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.789 [2024-04-24 21:41:34.501155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.789 qpair failed and we were unable to recover it. 00:26:11.789 [2024-04-24 21:41:34.510994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.789 [2024-04-24 21:41:34.511119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.789 [2024-04-24 21:41:34.511138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.789 [2024-04-24 21:41:34.511148] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.789 [2024-04-24 21:41:34.511156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.789 [2024-04-24 21:41:34.511175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.789 qpair failed and we were unable to recover it. 00:26:11.789 [2024-04-24 21:41:34.521025] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.789 [2024-04-24 21:41:34.521186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.789 [2024-04-24 21:41:34.521205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.789 [2024-04-24 21:41:34.521215] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.789 [2024-04-24 21:41:34.521223] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.789 [2024-04-24 21:41:34.521241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.789 qpair failed and we were unable to recover it. 
00:26:11.789 [2024-04-24 21:41:34.531131] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.789 [2024-04-24 21:41:34.531254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.789 [2024-04-24 21:41:34.531273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.789 [2024-04-24 21:41:34.531282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.789 [2024-04-24 21:41:34.531291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.789 [2024-04-24 21:41:34.531310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.789 qpair failed and we were unable to recover it. 00:26:11.789 [2024-04-24 21:41:34.541198] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.789 [2024-04-24 21:41:34.541339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.789 [2024-04-24 21:41:34.541358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.789 [2024-04-24 21:41:34.541368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.789 [2024-04-24 21:41:34.541377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.789 [2024-04-24 21:41:34.541395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.789 qpair failed and we were unable to recover it. 00:26:11.789 [2024-04-24 21:41:34.551224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.789 [2024-04-24 21:41:34.551346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.789 [2024-04-24 21:41:34.551365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.789 [2024-04-24 21:41:34.551375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.789 [2024-04-24 21:41:34.551383] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.789 [2024-04-24 21:41:34.551402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.789 qpair failed and we were unable to recover it. 
00:26:11.789 [2024-04-24 21:41:34.561179] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.789 [2024-04-24 21:41:34.561306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.789 [2024-04-24 21:41:34.561325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.789 [2024-04-24 21:41:34.561335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.789 [2024-04-24 21:41:34.561343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.789 [2024-04-24 21:41:34.561362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.789 qpair failed and we were unable to recover it. 00:26:11.789 [2024-04-24 21:41:34.571252] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.789 [2024-04-24 21:41:34.571569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.789 [2024-04-24 21:41:34.571589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.789 [2024-04-24 21:41:34.571600] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.789 [2024-04-24 21:41:34.571608] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.789 [2024-04-24 21:41:34.571627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.789 qpair failed and we were unable to recover it. 00:26:11.789 [2024-04-24 21:41:34.581271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.789 [2024-04-24 21:41:34.581396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.789 [2024-04-24 21:41:34.581415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.789 [2024-04-24 21:41:34.581425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.789 [2024-04-24 21:41:34.581437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.789 [2024-04-24 21:41:34.581460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.789 qpair failed and we were unable to recover it. 
00:26:11.789 [2024-04-24 21:41:34.591294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.789 [2024-04-24 21:41:34.591418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.789 [2024-04-24 21:41:34.591436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.789 [2024-04-24 21:41:34.591446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.789 [2024-04-24 21:41:34.591462] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.789 [2024-04-24 21:41:34.591481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.789 qpair failed and we were unable to recover it. 00:26:11.789 [2024-04-24 21:41:34.601297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.789 [2024-04-24 21:41:34.601423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.789 [2024-04-24 21:41:34.601442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.789 [2024-04-24 21:41:34.601459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.789 [2024-04-24 21:41:34.601469] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.789 [2024-04-24 21:41:34.601487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.789 qpair failed and we were unable to recover it. 00:26:11.789 [2024-04-24 21:41:34.611347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.789 [2024-04-24 21:41:34.611482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.789 [2024-04-24 21:41:34.611500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.789 [2024-04-24 21:41:34.611510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.789 [2024-04-24 21:41:34.611519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.789 [2024-04-24 21:41:34.611537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.789 qpair failed and we were unable to recover it. 
00:26:11.789 [2024-04-24 21:41:34.621371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.789 [2024-04-24 21:41:34.621497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.789 [2024-04-24 21:41:34.621516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.789 [2024-04-24 21:41:34.621526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.789 [2024-04-24 21:41:34.621535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.789 [2024-04-24 21:41:34.621554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.789 qpair failed and we were unable to recover it. 00:26:11.789 [2024-04-24 21:41:34.631524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.789 [2024-04-24 21:41:34.631666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.789 [2024-04-24 21:41:34.631685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.789 [2024-04-24 21:41:34.631695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.789 [2024-04-24 21:41:34.631703] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.789 [2024-04-24 21:41:34.631722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.789 qpair failed and we were unable to recover it. 00:26:11.789 [2024-04-24 21:41:34.641483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.790 [2024-04-24 21:41:34.641607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.790 [2024-04-24 21:41:34.641626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.790 [2024-04-24 21:41:34.641636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.790 [2024-04-24 21:41:34.641644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.790 [2024-04-24 21:41:34.641662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.790 qpair failed and we were unable to recover it. 
00:26:11.790 [2024-04-24 21:41:34.651506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.790 [2024-04-24 21:41:34.651645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.790 [2024-04-24 21:41:34.651664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.790 [2024-04-24 21:41:34.651675] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.790 [2024-04-24 21:41:34.651684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.790 [2024-04-24 21:41:34.651702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.790 qpair failed and we were unable to recover it. 00:26:11.790 [2024-04-24 21:41:34.661541] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.790 [2024-04-24 21:41:34.661894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.790 [2024-04-24 21:41:34.661914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.790 [2024-04-24 21:41:34.661924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.790 [2024-04-24 21:41:34.661932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.790 [2024-04-24 21:41:34.661951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.790 qpair failed and we were unable to recover it. 00:26:11.790 [2024-04-24 21:41:34.671538] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:11.790 [2024-04-24 21:41:34.671662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:11.790 [2024-04-24 21:41:34.671681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:11.790 [2024-04-24 21:41:34.671691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:11.790 [2024-04-24 21:41:34.671703] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:11.790 [2024-04-24 21:41:34.671722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:11.790 qpair failed and we were unable to recover it. 
00:26:12.049 [2024-04-24 21:41:34.681549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.049 [2024-04-24 21:41:34.681671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.049 [2024-04-24 21:41:34.681690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.049 [2024-04-24 21:41:34.681700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.049 [2024-04-24 21:41:34.681709] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.049 [2024-04-24 21:41:34.681727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.049 qpair failed and we were unable to recover it. 00:26:12.049 [2024-04-24 21:41:34.691579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.049 [2024-04-24 21:41:34.691704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.049 [2024-04-24 21:41:34.691723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.049 [2024-04-24 21:41:34.691733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.049 [2024-04-24 21:41:34.691742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.049 [2024-04-24 21:41:34.691760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.049 qpair failed and we were unable to recover it. 00:26:12.049 [2024-04-24 21:41:34.701615] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.049 [2024-04-24 21:41:34.701741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.049 [2024-04-24 21:41:34.701760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.049 [2024-04-24 21:41:34.701770] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.049 [2024-04-24 21:41:34.701779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.049 [2024-04-24 21:41:34.701798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.049 qpair failed and we were unable to recover it. 
00:26:12.049 [2024-04-24 21:41:34.711620] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.049 [2024-04-24 21:41:34.711747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.049 [2024-04-24 21:41:34.711766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.049 [2024-04-24 21:41:34.711776] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.049 [2024-04-24 21:41:34.711785] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.049 [2024-04-24 21:41:34.711803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.049 qpair failed and we were unable to recover it. 00:26:12.049 [2024-04-24 21:41:34.721662] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.049 [2024-04-24 21:41:34.721787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.049 [2024-04-24 21:41:34.721806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.049 [2024-04-24 21:41:34.721815] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.049 [2024-04-24 21:41:34.721824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.049 [2024-04-24 21:41:34.721842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.049 qpair failed and we were unable to recover it. 00:26:12.049 [2024-04-24 21:41:34.731677] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.049 [2024-04-24 21:41:34.731805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.049 [2024-04-24 21:41:34.731824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.049 [2024-04-24 21:41:34.731834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.049 [2024-04-24 21:41:34.731843] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.049 [2024-04-24 21:41:34.731861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.049 qpair failed and we were unable to recover it. 
00:26:12.049 [2024-04-24 21:41:34.741657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.049 [2024-04-24 21:41:34.741784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.049 [2024-04-24 21:41:34.741803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.049 [2024-04-24 21:41:34.741813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.049 [2024-04-24 21:41:34.741821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.049 [2024-04-24 21:41:34.741839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.049 qpair failed and we were unable to recover it. 00:26:12.050 [2024-04-24 21:41:34.751762] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.050 [2024-04-24 21:41:34.751890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.050 [2024-04-24 21:41:34.751909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.050 [2024-04-24 21:41:34.751919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.050 [2024-04-24 21:41:34.751927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.050 [2024-04-24 21:41:34.751945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.050 qpair failed and we were unable to recover it. 00:26:12.050 [2024-04-24 21:41:34.761731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.050 [2024-04-24 21:41:34.761894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.050 [2024-04-24 21:41:34.761913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.050 [2024-04-24 21:41:34.761926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.050 [2024-04-24 21:41:34.761935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.050 [2024-04-24 21:41:34.761954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.050 qpair failed and we were unable to recover it. 
00:26:12.050 [2024-04-24 21:41:34.771785] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.050 [2024-04-24 21:41:34.771914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.050 [2024-04-24 21:41:34.771933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.050 [2024-04-24 21:41:34.771943] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.050 [2024-04-24 21:41:34.771952] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.050 [2024-04-24 21:41:34.771970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.050 qpair failed and we were unable to recover it. 00:26:12.050 [2024-04-24 21:41:34.781868] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.050 [2024-04-24 21:41:34.781991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.050 [2024-04-24 21:41:34.782010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.050 [2024-04-24 21:41:34.782020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.050 [2024-04-24 21:41:34.782029] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.050 [2024-04-24 21:41:34.782047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.050 qpair failed and we were unable to recover it. 00:26:12.050 [2024-04-24 21:41:34.791854] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.050 [2024-04-24 21:41:34.791979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.050 [2024-04-24 21:41:34.791997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.050 [2024-04-24 21:41:34.792008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.050 [2024-04-24 21:41:34.792017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.050 [2024-04-24 21:41:34.792036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.050 qpair failed and we were unable to recover it. 
00:26:12.050 [2024-04-24 21:41:34.801869] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.050 [2024-04-24 21:41:34.801993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.050 [2024-04-24 21:41:34.802013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.050 [2024-04-24 21:41:34.802023] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.050 [2024-04-24 21:41:34.802032] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.050 [2024-04-24 21:41:34.802049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.050 qpair failed and we were unable to recover it. 00:26:12.050 [2024-04-24 21:41:34.811929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.050 [2024-04-24 21:41:34.812251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.050 [2024-04-24 21:41:34.812271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.050 [2024-04-24 21:41:34.812281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.050 [2024-04-24 21:41:34.812290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.050 [2024-04-24 21:41:34.812308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.050 qpair failed and we were unable to recover it. 00:26:12.050 [2024-04-24 21:41:34.821986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.050 [2024-04-24 21:41:34.822131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.050 [2024-04-24 21:41:34.822150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.050 [2024-04-24 21:41:34.822160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.050 [2024-04-24 21:41:34.822169] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.050 [2024-04-24 21:41:34.822187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.050 qpair failed and we were unable to recover it. 
00:26:12.050 [2024-04-24 21:41:34.831903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.050 [2024-04-24 21:41:34.832029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.050 [2024-04-24 21:41:34.832048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.050 [2024-04-24 21:41:34.832057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.050 [2024-04-24 21:41:34.832066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.050 [2024-04-24 21:41:34.832084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.050 qpair failed and we were unable to recover it. 00:26:12.050 [2024-04-24 21:41:34.841955] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.050 [2024-04-24 21:41:34.842281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.050 [2024-04-24 21:41:34.842301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.050 [2024-04-24 21:41:34.842311] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.050 [2024-04-24 21:41:34.842320] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.050 [2024-04-24 21:41:34.842339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.050 qpair failed and we were unable to recover it. 00:26:12.050 [2024-04-24 21:41:34.851967] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.050 [2024-04-24 21:41:34.852090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.050 [2024-04-24 21:41:34.852109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.050 [2024-04-24 21:41:34.852122] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.050 [2024-04-24 21:41:34.852131] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.050 [2024-04-24 21:41:34.852150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.050 qpair failed and we were unable to recover it. 
00:26:12.050 [2024-04-24 21:41:34.861977] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.050 [2024-04-24 21:41:34.862150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.050 [2024-04-24 21:41:34.862168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.050 [2024-04-24 21:41:34.862178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.050 [2024-04-24 21:41:34.862187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.050 [2024-04-24 21:41:34.862206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.050 qpair failed and we were unable to recover it. 00:26:12.050 [2024-04-24 21:41:34.872023] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.050 [2024-04-24 21:41:34.872176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.050 [2024-04-24 21:41:34.872195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.050 [2024-04-24 21:41:34.872205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.050 [2024-04-24 21:41:34.872214] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.050 [2024-04-24 21:41:34.872232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.050 qpair failed and we were unable to recover it. 00:26:12.050 [2024-04-24 21:41:34.882166] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.050 [2024-04-24 21:41:34.882333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.050 [2024-04-24 21:41:34.882351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.051 [2024-04-24 21:41:34.882361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.051 [2024-04-24 21:41:34.882370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.051 [2024-04-24 21:41:34.882389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.051 qpair failed and we were unable to recover it. 
00:26:12.051 [2024-04-24 21:41:34.892185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.051 [2024-04-24 21:41:34.892309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.051 [2024-04-24 21:41:34.892328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.051 [2024-04-24 21:41:34.892338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.051 [2024-04-24 21:41:34.892346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.051 [2024-04-24 21:41:34.892366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.051 qpair failed and we were unable to recover it. 00:26:12.051 [2024-04-24 21:41:34.902194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.051 [2024-04-24 21:41:34.902342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.051 [2024-04-24 21:41:34.902361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.051 [2024-04-24 21:41:34.902370] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.051 [2024-04-24 21:41:34.902379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.051 [2024-04-24 21:41:34.902396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.051 qpair failed and we were unable to recover it. 00:26:12.051 [2024-04-24 21:41:34.912245] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.051 [2024-04-24 21:41:34.912372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.051 [2024-04-24 21:41:34.912391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.051 [2024-04-24 21:41:34.912401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.051 [2024-04-24 21:41:34.912410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.051 [2024-04-24 21:41:34.912428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.051 qpair failed and we were unable to recover it. 
00:26:12.051 [2024-04-24 21:41:34.922204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.051 [2024-04-24 21:41:34.922331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.051 [2024-04-24 21:41:34.922350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.051 [2024-04-24 21:41:34.922360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.051 [2024-04-24 21:41:34.922369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.051 [2024-04-24 21:41:34.922387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.051 qpair failed and we were unable to recover it.
00:26:12.051 [2024-04-24 21:41:34.932272] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.051 [2024-04-24 21:41:34.932417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.051 [2024-04-24 21:41:34.932435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.051 [2024-04-24 21:41:34.932445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.051 [2024-04-24 21:41:34.932458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.051 [2024-04-24 21:41:34.932477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.051 qpair failed and we were unable to recover it.
00:26:12.310 [2024-04-24 21:41:34.942263] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.310 [2024-04-24 21:41:34.942424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.310 [2024-04-24 21:41:34.942443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.310 [2024-04-24 21:41:34.942461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.310 [2024-04-24 21:41:34.942470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.310 [2024-04-24 21:41:34.942488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.310 qpair failed and we were unable to recover it.
00:26:12.310 [2024-04-24 21:41:34.952338] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.310 [2024-04-24 21:41:34.952482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.310 [2024-04-24 21:41:34.952501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.310 [2024-04-24 21:41:34.952511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.310 [2024-04-24 21:41:34.952519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.310 [2024-04-24 21:41:34.952538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.310 qpair failed and we were unable to recover it.
00:26:12.310 [2024-04-24 21:41:34.962330] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.310 [2024-04-24 21:41:34.962461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.310 [2024-04-24 21:41:34.962481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.310 [2024-04-24 21:41:34.962491] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.310 [2024-04-24 21:41:34.962500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.310 [2024-04-24 21:41:34.962519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.310 qpair failed and we were unable to recover it.
00:26:12.310 [2024-04-24 21:41:34.972362] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.310 [2024-04-24 21:41:34.972493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.310 [2024-04-24 21:41:34.972511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.310 [2024-04-24 21:41:34.972521] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.310 [2024-04-24 21:41:34.972530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.310 [2024-04-24 21:41:34.972548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.310 qpair failed and we were unable to recover it.
00:26:12.310 [2024-04-24 21:41:34.982389] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.310 [2024-04-24 21:41:34.982518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.310 [2024-04-24 21:41:34.982537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.310 [2024-04-24 21:41:34.982547] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.310 [2024-04-24 21:41:34.982555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.310 [2024-04-24 21:41:34.982573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.310 qpair failed and we were unable to recover it.
00:26:12.310 [2024-04-24 21:41:34.992434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.310 [2024-04-24 21:41:34.992575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.310 [2024-04-24 21:41:34.992594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.310 [2024-04-24 21:41:34.992604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.310 [2024-04-24 21:41:34.992612] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.310 [2024-04-24 21:41:34.992631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.310 qpair failed and we were unable to recover it.
00:26:12.310 [2024-04-24 21:41:35.002436] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.310 [2024-04-24 21:41:35.002564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.310 [2024-04-24 21:41:35.002584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.310 [2024-04-24 21:41:35.002594] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.310 [2024-04-24 21:41:35.002603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.310 [2024-04-24 21:41:35.002621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.310 qpair failed and we were unable to recover it.
00:26:12.310 [2024-04-24 21:41:35.012512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.310 [2024-04-24 21:41:35.012768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.310 [2024-04-24 21:41:35.012788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.310 [2024-04-24 21:41:35.012798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.310 [2024-04-24 21:41:35.012807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.310 [2024-04-24 21:41:35.012826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.310 qpair failed and we were unable to recover it.
00:26:12.310 [2024-04-24 21:41:35.022529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.310 [2024-04-24 21:41:35.022677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.310 [2024-04-24 21:41:35.022695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.310 [2024-04-24 21:41:35.022705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.310 [2024-04-24 21:41:35.022714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.310 [2024-04-24 21:41:35.022733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.310 qpair failed and we were unable to recover it.
00:26:12.310 [2024-04-24 21:41:35.032536] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.310 [2024-04-24 21:41:35.032686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.310 [2024-04-24 21:41:35.032708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.310 [2024-04-24 21:41:35.032718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.310 [2024-04-24 21:41:35.032727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.310 [2024-04-24 21:41:35.032746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.310 qpair failed and we were unable to recover it.
00:26:12.311 [2024-04-24 21:41:35.042553] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.311 [2024-04-24 21:41:35.042678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.311 [2024-04-24 21:41:35.042696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.311 [2024-04-24 21:41:35.042706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.311 [2024-04-24 21:41:35.042715] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.311 [2024-04-24 21:41:35.042733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.311 qpair failed and we were unable to recover it.
00:26:12.311 [2024-04-24 21:41:35.052587] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.311 [2024-04-24 21:41:35.052713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.311 [2024-04-24 21:41:35.052732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.311 [2024-04-24 21:41:35.052743] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.311 [2024-04-24 21:41:35.052751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.311 [2024-04-24 21:41:35.052769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.311 qpair failed and we were unable to recover it.
00:26:12.311 [2024-04-24 21:41:35.062628] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.311 [2024-04-24 21:41:35.062766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.311 [2024-04-24 21:41:35.062785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.311 [2024-04-24 21:41:35.062796] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.311 [2024-04-24 21:41:35.062805] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.311 [2024-04-24 21:41:35.062823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.311 qpair failed and we were unable to recover it.
00:26:12.311 [2024-04-24 21:41:35.072639] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.311 [2024-04-24 21:41:35.072763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.311 [2024-04-24 21:41:35.072781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.311 [2024-04-24 21:41:35.072792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.311 [2024-04-24 21:41:35.072800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.311 [2024-04-24 21:41:35.072822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.311 qpair failed and we were unable to recover it.
00:26:12.311 [2024-04-24 21:41:35.082667] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.311 [2024-04-24 21:41:35.082793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.311 [2024-04-24 21:41:35.082812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.311 [2024-04-24 21:41:35.082822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.311 [2024-04-24 21:41:35.082831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.311 [2024-04-24 21:41:35.082849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.311 qpair failed and we were unable to recover it.
00:26:12.311 [2024-04-24 21:41:35.092702] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.311 [2024-04-24 21:41:35.092830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.311 [2024-04-24 21:41:35.092849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.311 [2024-04-24 21:41:35.092859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.311 [2024-04-24 21:41:35.092868] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.311 [2024-04-24 21:41:35.092887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.311 qpair failed and we were unable to recover it.
00:26:12.311 [2024-04-24 21:41:35.102772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.311 [2024-04-24 21:41:35.102936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.311 [2024-04-24 21:41:35.102955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.311 [2024-04-24 21:41:35.102965] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.311 [2024-04-24 21:41:35.102973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.311 [2024-04-24 21:41:35.102993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.311 qpair failed and we were unable to recover it.
00:26:12.311 [2024-04-24 21:41:35.112775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.311 [2024-04-24 21:41:35.112899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.311 [2024-04-24 21:41:35.112917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.311 [2024-04-24 21:41:35.112927] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.311 [2024-04-24 21:41:35.112936] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.311 [2024-04-24 21:41:35.112954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.311 qpair failed and we were unable to recover it.
00:26:12.311 [2024-04-24 21:41:35.122782] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.311 [2024-04-24 21:41:35.122906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.311 [2024-04-24 21:41:35.122928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.311 [2024-04-24 21:41:35.122939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.311 [2024-04-24 21:41:35.122948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.311 [2024-04-24 21:41:35.122966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.311 qpair failed and we were unable to recover it.
00:26:12.311 [2024-04-24 21:41:35.132821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.311 [2024-04-24 21:41:35.132939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.311 [2024-04-24 21:41:35.132958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.311 [2024-04-24 21:41:35.132968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.311 [2024-04-24 21:41:35.132977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.311 [2024-04-24 21:41:35.132995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.311 qpair failed and we were unable to recover it.
00:26:12.311 [2024-04-24 21:41:35.142835] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.311 [2024-04-24 21:41:35.142957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.311 [2024-04-24 21:41:35.142976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.311 [2024-04-24 21:41:35.142986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.311 [2024-04-24 21:41:35.142994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.311 [2024-04-24 21:41:35.143013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.311 qpair failed and we were unable to recover it.
00:26:12.311 [2024-04-24 21:41:35.152807] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.311 [2024-04-24 21:41:35.152932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.311 [2024-04-24 21:41:35.152950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.311 [2024-04-24 21:41:35.152960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.311 [2024-04-24 21:41:35.152968] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.311 [2024-04-24 21:41:35.152987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.311 qpair failed and we were unable to recover it.
00:26:12.311 [2024-04-24 21:41:35.162895] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.311 [2024-04-24 21:41:35.163022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.311 [2024-04-24 21:41:35.163041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.311 [2024-04-24 21:41:35.163051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.311 [2024-04-24 21:41:35.163059] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.311 [2024-04-24 21:41:35.163080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.311 qpair failed and we were unable to recover it.
00:26:12.311 [2024-04-24 21:41:35.172927] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.311 [2024-04-24 21:41:35.173052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.312 [2024-04-24 21:41:35.173071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.312 [2024-04-24 21:41:35.173081] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.312 [2024-04-24 21:41:35.173089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.312 [2024-04-24 21:41:35.173108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.312 qpair failed and we were unable to recover it.
00:26:12.312 [2024-04-24 21:41:35.182945] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.312 [2024-04-24 21:41:35.183069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.312 [2024-04-24 21:41:35.183088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.312 [2024-04-24 21:41:35.183098] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.312 [2024-04-24 21:41:35.183106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.312 [2024-04-24 21:41:35.183124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.312 qpair failed and we were unable to recover it.
00:26:12.312 [2024-04-24 21:41:35.192981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.312 [2024-04-24 21:41:35.193106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.312 [2024-04-24 21:41:35.193125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.312 [2024-04-24 21:41:35.193135] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.312 [2024-04-24 21:41:35.193143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.312 [2024-04-24 21:41:35.193161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.312 qpair failed and we were unable to recover it.
00:26:12.571 [2024-04-24 21:41:35.202973] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.571 [2024-04-24 21:41:35.203098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.571 [2024-04-24 21:41:35.203117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.571 [2024-04-24 21:41:35.203127] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.571 [2024-04-24 21:41:35.203135] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.571 [2024-04-24 21:41:35.203153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.571 qpair failed and we were unable to recover it.
00:26:12.571 [2024-04-24 21:41:35.213027] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.571 [2024-04-24 21:41:35.213149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.571 [2024-04-24 21:41:35.213171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.571 [2024-04-24 21:41:35.213182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.571 [2024-04-24 21:41:35.213190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.571 [2024-04-24 21:41:35.213209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.571 qpair failed and we were unable to recover it.
00:26:12.571 [2024-04-24 21:41:35.223056] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.571 [2024-04-24 21:41:35.223377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.571 [2024-04-24 21:41:35.223397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.571 [2024-04-24 21:41:35.223407] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.571 [2024-04-24 21:41:35.223416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.571 [2024-04-24 21:41:35.223434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.571 qpair failed and we were unable to recover it.
00:26:12.571 [2024-04-24 21:41:35.233081] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.571 [2024-04-24 21:41:35.233205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.571 [2024-04-24 21:41:35.233223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.571 [2024-04-24 21:41:35.233233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.571 [2024-04-24 21:41:35.233242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.571 [2024-04-24 21:41:35.233260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.571 qpair failed and we were unable to recover it.
00:26:12.571 [2024-04-24 21:41:35.243107] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.571 [2024-04-24 21:41:35.243237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.571 [2024-04-24 21:41:35.243256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.571 [2024-04-24 21:41:35.243266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.571 [2024-04-24 21:41:35.243274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.571 [2024-04-24 21:41:35.243293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.571 qpair failed and we were unable to recover it.
00:26:12.571 [2024-04-24 21:41:35.253137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.571 [2024-04-24 21:41:35.253262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.571 [2024-04-24 21:41:35.253281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.571 [2024-04-24 21:41:35.253291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.571 [2024-04-24 21:41:35.253300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.571 [2024-04-24 21:41:35.253321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.571 qpair failed and we were unable to recover it.
00:26:12.571 [2024-04-24 21:41:35.263137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.571 [2024-04-24 21:41:35.263264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.571 [2024-04-24 21:41:35.263283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.572 [2024-04-24 21:41:35.263293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.572 [2024-04-24 21:41:35.263301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.572 [2024-04-24 21:41:35.263320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.572 qpair failed and we were unable to recover it.
00:26:12.572 [2024-04-24 21:41:35.273199] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.572 [2024-04-24 21:41:35.273324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.572 [2024-04-24 21:41:35.273342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.572 [2024-04-24 21:41:35.273352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.572 [2024-04-24 21:41:35.273361] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.572 [2024-04-24 21:41:35.273379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.572 qpair failed and we were unable to recover it.
00:26:12.572 [2024-04-24 21:41:35.283199] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.572 [2024-04-24 21:41:35.283350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.572 [2024-04-24 21:41:35.283369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.572 [2024-04-24 21:41:35.283379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.572 [2024-04-24 21:41:35.283387] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.572 [2024-04-24 21:41:35.283405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.572 qpair failed and we were unable to recover it.
00:26:12.572 [2024-04-24 21:41:35.293247] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.572 [2024-04-24 21:41:35.293394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.572 [2024-04-24 21:41:35.293412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.572 [2024-04-24 21:41:35.293422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.572 [2024-04-24 21:41:35.293431] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.572 [2024-04-24 21:41:35.293454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.572 qpair failed and we were unable to recover it.
00:26:12.572 [2024-04-24 21:41:35.303278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.572 [2024-04-24 21:41:35.303406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.572 [2024-04-24 21:41:35.303428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.572 [2024-04-24 21:41:35.303438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.572 [2024-04-24 21:41:35.303446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.572 [2024-04-24 21:41:35.303470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.572 qpair failed and we were unable to recover it.
00:26:12.572 [2024-04-24 21:41:35.313310] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.572 [2024-04-24 21:41:35.313438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.572 [2024-04-24 21:41:35.313461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.572 [2024-04-24 21:41:35.313472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.572 [2024-04-24 21:41:35.313480] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.572 [2024-04-24 21:41:35.313498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.572 qpair failed and we were unable to recover it.
00:26:12.572 [2024-04-24 21:41:35.323280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.572 [2024-04-24 21:41:35.323449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.572 [2024-04-24 21:41:35.323471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.572 [2024-04-24 21:41:35.323480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.572 [2024-04-24 21:41:35.323489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.572 [2024-04-24 21:41:35.323507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.572 qpair failed and we were unable to recover it.
00:26:12.572 [2024-04-24 21:41:35.333356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.572 [2024-04-24 21:41:35.333484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.572 [2024-04-24 21:41:35.333503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.572 [2024-04-24 21:41:35.333513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.572 [2024-04-24 21:41:35.333522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.572 [2024-04-24 21:41:35.333541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.572 qpair failed and we were unable to recover it.
00:26:12.572 [2024-04-24 21:41:35.343360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.572 [2024-04-24 21:41:35.343484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.572 [2024-04-24 21:41:35.343503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.572 [2024-04-24 21:41:35.343513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.572 [2024-04-24 21:41:35.343524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.572 [2024-04-24 21:41:35.343543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.572 qpair failed and we were unable to recover it.
00:26:12.572 [2024-04-24 21:41:35.353416] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.572 [2024-04-24 21:41:35.353541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.572 [2024-04-24 21:41:35.353560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.572 [2024-04-24 21:41:35.353570] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.572 [2024-04-24 21:41:35.353579] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.572 [2024-04-24 21:41:35.353597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.572 qpair failed and we were unable to recover it.
00:26:12.572 [2024-04-24 21:41:35.363611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.572 [2024-04-24 21:41:35.363777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.572 [2024-04-24 21:41:35.363796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.572 [2024-04-24 21:41:35.363805] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.572 [2024-04-24 21:41:35.363814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.572 [2024-04-24 21:41:35.363833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.572 qpair failed and we were unable to recover it.
00:26:12.572 [2024-04-24 21:41:35.373464] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.572 [2024-04-24 21:41:35.373588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.572 [2024-04-24 21:41:35.373607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.572 [2024-04-24 21:41:35.373617] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.572 [2024-04-24 21:41:35.373626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.572 [2024-04-24 21:41:35.373645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.572 qpair failed and we were unable to recover it.
00:26:12.572 [2024-04-24 21:41:35.383491] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.572 [2024-04-24 21:41:35.383618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.572 [2024-04-24 21:41:35.383637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.572 [2024-04-24 21:41:35.383647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.572 [2024-04-24 21:41:35.383656] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.572 [2024-04-24 21:41:35.383674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.572 qpair failed and we were unable to recover it.
00:26:12.573 [2024-04-24 21:41:35.393539] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.573 [2024-04-24 21:41:35.393668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.573 [2024-04-24 21:41:35.393687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.573 [2024-04-24 21:41:35.393697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.573 [2024-04-24 21:41:35.393706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.573 [2024-04-24 21:41:35.393724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.573 qpair failed and we were unable to recover it.
00:26:12.573 [2024-04-24 21:41:35.403540] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.573 [2024-04-24 21:41:35.403675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.573 [2024-04-24 21:41:35.403695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.573 [2024-04-24 21:41:35.403705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.573 [2024-04-24 21:41:35.403714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.573 [2024-04-24 21:41:35.403732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.573 qpair failed and we were unable to recover it.
00:26:12.573 [2024-04-24 21:41:35.413580] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.573 [2024-04-24 21:41:35.413726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.573 [2024-04-24 21:41:35.413745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.573 [2024-04-24 21:41:35.413755] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.573 [2024-04-24 21:41:35.413764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.573 [2024-04-24 21:41:35.413783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.573 qpair failed and we were unable to recover it.
00:26:12.573 [2024-04-24 21:41:35.423579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.573 [2024-04-24 21:41:35.423715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.573 [2024-04-24 21:41:35.423734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.573 [2024-04-24 21:41:35.423744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.573 [2024-04-24 21:41:35.423753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.573 [2024-04-24 21:41:35.423772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.573 qpair failed and we were unable to recover it.
00:26:12.573 [2024-04-24 21:41:35.433636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.573 [2024-04-24 21:41:35.433757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.573 [2024-04-24 21:41:35.433777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.573 [2024-04-24 21:41:35.433787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.573 [2024-04-24 21:41:35.433798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.573 [2024-04-24 21:41:35.433818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.573 qpair failed and we were unable to recover it.
00:26:12.573 [2024-04-24 21:41:35.443669] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.573 [2024-04-24 21:41:35.443800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.573 [2024-04-24 21:41:35.443819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.573 [2024-04-24 21:41:35.443829] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.573 [2024-04-24 21:41:35.443838] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.573 [2024-04-24 21:41:35.443857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.573 qpair failed and we were unable to recover it.
00:26:12.573 [2024-04-24 21:41:35.453706] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.573 [2024-04-24 21:41:35.453831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.573 [2024-04-24 21:41:35.453851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.573 [2024-04-24 21:41:35.453861] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.573 [2024-04-24 21:41:35.453870] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.573 [2024-04-24 21:41:35.453889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.573 qpair failed and we were unable to recover it.
00:26:12.833 [2024-04-24 21:41:35.463779] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.833 [2024-04-24 21:41:35.463902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.833 [2024-04-24 21:41:35.463921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.833 [2024-04-24 21:41:35.463931] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.833 [2024-04-24 21:41:35.463940] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.833 [2024-04-24 21:41:35.463958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.833 qpair failed and we were unable to recover it.
00:26:12.833 [2024-04-24 21:41:35.473765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.833 [2024-04-24 21:41:35.473910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.833 [2024-04-24 21:41:35.473930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.833 [2024-04-24 21:41:35.473941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.833 [2024-04-24 21:41:35.473950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.833 [2024-04-24 21:41:35.473969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.833 qpair failed and we were unable to recover it.
00:26:12.833 [2024-04-24 21:41:35.483766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:12.833 [2024-04-24 21:41:35.483894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:12.833 [2024-04-24 21:41:35.483913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:12.833 [2024-04-24 21:41:35.483923] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:12.833 [2024-04-24 21:41:35.483931] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730
00:26:12.833 [2024-04-24 21:41:35.483949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:12.833 qpair failed and we were unable to recover it.
00:26:12.833 [2024-04-24 21:41:35.493806] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.833 [2024-04-24 21:41:35.493927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.833 [2024-04-24 21:41:35.493945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.833 [2024-04-24 21:41:35.493955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.833 [2024-04-24 21:41:35.493964] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.833 [2024-04-24 21:41:35.493982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-24 21:41:35.503845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.833 [2024-04-24 21:41:35.503964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.833 [2024-04-24 21:41:35.503984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.833 [2024-04-24 21:41:35.503994] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.833 [2024-04-24 21:41:35.504002] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.833 [2024-04-24 21:41:35.504021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-24 21:41:35.513915] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.833 [2024-04-24 21:41:35.514050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.833 [2024-04-24 21:41:35.514069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.833 [2024-04-24 21:41:35.514079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.833 [2024-04-24 21:41:35.514087] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.833 [2024-04-24 21:41:35.514106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.833 qpair failed and we were unable to recover it. 
00:26:12.833 [2024-04-24 21:41:35.523897] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.833 [2024-04-24 21:41:35.524019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.833 [2024-04-24 21:41:35.524038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.833 [2024-04-24 21:41:35.524051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.833 [2024-04-24 21:41:35.524060] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.833 [2024-04-24 21:41:35.524078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-24 21:41:35.533921] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.833 [2024-04-24 21:41:35.534240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.833 [2024-04-24 21:41:35.534260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.833 [2024-04-24 21:41:35.534270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.833 [2024-04-24 21:41:35.534279] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.833 [2024-04-24 21:41:35.534298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-24 21:41:35.543986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.833 [2024-04-24 21:41:35.544127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.833 [2024-04-24 21:41:35.544145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.833 [2024-04-24 21:41:35.544155] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.833 [2024-04-24 21:41:35.544164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.833 [2024-04-24 21:41:35.544182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.833 qpair failed and we were unable to recover it. 
00:26:12.833 [2024-04-24 21:41:35.553985] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.833 [2024-04-24 21:41:35.554107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.833 [2024-04-24 21:41:35.554126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.833 [2024-04-24 21:41:35.554136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.833 [2024-04-24 21:41:35.554144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.833 [2024-04-24 21:41:35.554162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-24 21:41:35.564007] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.833 [2024-04-24 21:41:35.564129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.833 [2024-04-24 21:41:35.564148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.833 [2024-04-24 21:41:35.564158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.833 [2024-04-24 21:41:35.564167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.833 [2024-04-24 21:41:35.564185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.833 qpair failed and we were unable to recover it. 00:26:12.833 [2024-04-24 21:41:35.574025] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.833 [2024-04-24 21:41:35.574148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.833 [2024-04-24 21:41:35.574167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.833 [2024-04-24 21:41:35.574177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.833 [2024-04-24 21:41:35.574186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.833 [2024-04-24 21:41:35.574204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.833 qpair failed and we were unable to recover it. 
00:26:12.834 [2024-04-24 21:41:35.584063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.834 [2024-04-24 21:41:35.584189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.834 [2024-04-24 21:41:35.584209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.834 [2024-04-24 21:41:35.584219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.834 [2024-04-24 21:41:35.584227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.834 [2024-04-24 21:41:35.584246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.834 qpair failed and we were unable to recover it. 00:26:12.834 [2024-04-24 21:41:35.594266] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.834 [2024-04-24 21:41:35.594396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.834 [2024-04-24 21:41:35.594415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.834 [2024-04-24 21:41:35.594425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.834 [2024-04-24 21:41:35.594433] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.834 [2024-04-24 21:41:35.594458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.834 qpair failed and we were unable to recover it. 00:26:12.834 [2024-04-24 21:41:35.604125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.834 [2024-04-24 21:41:35.604250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.834 [2024-04-24 21:41:35.604271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.834 [2024-04-24 21:41:35.604281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.834 [2024-04-24 21:41:35.604291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.834 [2024-04-24 21:41:35.604310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.834 qpair failed and we were unable to recover it. 
00:26:12.834 [2024-04-24 21:41:35.614157] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.834 [2024-04-24 21:41:35.614285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.834 [2024-04-24 21:41:35.614304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.834 [2024-04-24 21:41:35.614317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.834 [2024-04-24 21:41:35.614326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.834 [2024-04-24 21:41:35.614344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.834 qpair failed and we were unable to recover it. 00:26:12.834 [2024-04-24 21:41:35.624176] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.834 [2024-04-24 21:41:35.624302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.834 [2024-04-24 21:41:35.624321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.834 [2024-04-24 21:41:35.624330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.834 [2024-04-24 21:41:35.624339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.834 [2024-04-24 21:41:35.624358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.834 qpair failed and we were unable to recover it. 00:26:12.834 [2024-04-24 21:41:35.634216] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.834 [2024-04-24 21:41:35.634345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.834 [2024-04-24 21:41:35.634364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.834 [2024-04-24 21:41:35.634374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.834 [2024-04-24 21:41:35.634382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.834 [2024-04-24 21:41:35.634400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.834 qpair failed and we were unable to recover it. 
00:26:12.834 [2024-04-24 21:41:35.644458] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.834 [2024-04-24 21:41:35.644612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.834 [2024-04-24 21:41:35.644630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.834 [2024-04-24 21:41:35.644640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.834 [2024-04-24 21:41:35.644649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.834 [2024-04-24 21:41:35.644668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.834 qpair failed and we were unable to recover it. 00:26:12.834 [2024-04-24 21:41:35.654266] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.834 [2024-04-24 21:41:35.654388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.834 [2024-04-24 21:41:35.654407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.834 [2024-04-24 21:41:35.654417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.834 [2024-04-24 21:41:35.654425] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.834 [2024-04-24 21:41:35.654444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.834 qpair failed and we were unable to recover it. 00:26:12.834 [2024-04-24 21:41:35.664268] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.834 [2024-04-24 21:41:35.664401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.834 [2024-04-24 21:41:35.664420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.834 [2024-04-24 21:41:35.664430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.834 [2024-04-24 21:41:35.664439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.834 [2024-04-24 21:41:35.664461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.834 qpair failed and we were unable to recover it. 
00:26:12.834 [2024-04-24 21:41:35.674304] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.834 [2024-04-24 21:41:35.674426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.834 [2024-04-24 21:41:35.674445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.834 [2024-04-24 21:41:35.674461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.834 [2024-04-24 21:41:35.674470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.834 [2024-04-24 21:41:35.674489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.834 qpair failed and we were unable to recover it. 00:26:12.834 [2024-04-24 21:41:35.684345] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.834 [2024-04-24 21:41:35.684475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.834 [2024-04-24 21:41:35.684494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.834 [2024-04-24 21:41:35.684504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.834 [2024-04-24 21:41:35.684513] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.834 [2024-04-24 21:41:35.684531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.834 qpair failed and we were unable to recover it. 00:26:12.834 [2024-04-24 21:41:35.694386] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.834 [2024-04-24 21:41:35.694513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.834 [2024-04-24 21:41:35.694532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.834 [2024-04-24 21:41:35.694550] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.834 [2024-04-24 21:41:35.694559] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.834 [2024-04-24 21:41:35.694580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.834 qpair failed and we were unable to recover it. 
00:26:12.834 [2024-04-24 21:41:35.704403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.834 [2024-04-24 21:41:35.704528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.834 [2024-04-24 21:41:35.704548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.834 [2024-04-24 21:41:35.704560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.834 [2024-04-24 21:41:35.704569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.834 [2024-04-24 21:41:35.704588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.834 qpair failed and we were unable to recover it. 00:26:12.834 [2024-04-24 21:41:35.714454] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:12.834 [2024-04-24 21:41:35.714614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:12.835 [2024-04-24 21:41:35.714633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:12.835 [2024-04-24 21:41:35.714643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:12.835 [2024-04-24 21:41:35.714651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:12.835 [2024-04-24 21:41:35.714670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:12.835 qpair failed and we were unable to recover it. 00:26:13.097 [2024-04-24 21:41:35.724390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.097 [2024-04-24 21:41:35.724525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.097 [2024-04-24 21:41:35.724544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.097 [2024-04-24 21:41:35.724553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.097 [2024-04-24 21:41:35.724562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.097 [2024-04-24 21:41:35.724580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.097 qpair failed and we were unable to recover it. 
00:26:13.097 [2024-04-24 21:41:35.734485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.097 [2024-04-24 21:41:35.734604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.097 [2024-04-24 21:41:35.734622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.097 [2024-04-24 21:41:35.734632] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.097 [2024-04-24 21:41:35.734641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.097 [2024-04-24 21:41:35.734659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.097 qpair failed and we were unable to recover it. 00:26:13.097 [2024-04-24 21:41:35.744476] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.097 [2024-04-24 21:41:35.744609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.097 [2024-04-24 21:41:35.744627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.097 [2024-04-24 21:41:35.744637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.097 [2024-04-24 21:41:35.744645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.097 [2024-04-24 21:41:35.744664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.097 qpair failed and we were unable to recover it. 00:26:13.097 [2024-04-24 21:41:35.754483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.097 [2024-04-24 21:41:35.754620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.097 [2024-04-24 21:41:35.754639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.097 [2024-04-24 21:41:35.754649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.097 [2024-04-24 21:41:35.754657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.097 [2024-04-24 21:41:35.754676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.097 qpair failed and we were unable to recover it. 
00:26:13.097 [2024-04-24 21:41:35.764579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.097 [2024-04-24 21:41:35.764720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.097 [2024-04-24 21:41:35.764739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.098 [2024-04-24 21:41:35.764749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.098 [2024-04-24 21:41:35.764757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.098 [2024-04-24 21:41:35.764775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.098 qpair failed and we were unable to recover it. 00:26:13.098 [2024-04-24 21:41:35.774599] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.098 [2024-04-24 21:41:35.774722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.098 [2024-04-24 21:41:35.774740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.098 [2024-04-24 21:41:35.774751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.098 [2024-04-24 21:41:35.774759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.098 [2024-04-24 21:41:35.774778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.098 qpair failed and we were unable to recover it. 00:26:13.098 [2024-04-24 21:41:35.784646] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.098 [2024-04-24 21:41:35.784957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.098 [2024-04-24 21:41:35.784977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.098 [2024-04-24 21:41:35.784987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.098 [2024-04-24 21:41:35.784995] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.098 [2024-04-24 21:41:35.785014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.098 qpair failed and we were unable to recover it. 
00:26:13.098 [2024-04-24 21:41:35.794675] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.098 [2024-04-24 21:41:35.794807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.098 [2024-04-24 21:41:35.794828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.098 [2024-04-24 21:41:35.794838] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.098 [2024-04-24 21:41:35.794847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.098 [2024-04-24 21:41:35.794865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.098 qpair failed and we were unable to recover it. 00:26:13.098 [2024-04-24 21:41:35.804700] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.098 [2024-04-24 21:41:35.804827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.098 [2024-04-24 21:41:35.804846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.098 [2024-04-24 21:41:35.804856] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.098 [2024-04-24 21:41:35.804865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.098 [2024-04-24 21:41:35.804883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.098 qpair failed and we were unable to recover it. 00:26:13.098 [2024-04-24 21:41:35.814692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.098 [2024-04-24 21:41:35.814813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.098 [2024-04-24 21:41:35.814832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.098 [2024-04-24 21:41:35.814843] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.098 [2024-04-24 21:41:35.814851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.098 [2024-04-24 21:41:35.814869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.098 qpair failed and we were unable to recover it. 
00:26:13.098 [2024-04-24 21:41:35.824741] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.098 [2024-04-24 21:41:35.824866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.098 [2024-04-24 21:41:35.824885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.098 [2024-04-24 21:41:35.824895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.098 [2024-04-24 21:41:35.824903] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.098 [2024-04-24 21:41:35.824922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.098 qpair failed and we were unable to recover it. 00:26:13.098 [2024-04-24 21:41:35.834780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.098 [2024-04-24 21:41:35.834906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.098 [2024-04-24 21:41:35.834925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.098 [2024-04-24 21:41:35.834935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.098 [2024-04-24 21:41:35.834943] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.098 [2024-04-24 21:41:35.834965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.098 qpair failed and we were unable to recover it. 00:26:13.098 [2024-04-24 21:41:35.844780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.098 [2024-04-24 21:41:35.845118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.098 [2024-04-24 21:41:35.845139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.098 [2024-04-24 21:41:35.845149] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.098 [2024-04-24 21:41:35.845157] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.098 [2024-04-24 21:41:35.845176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.098 qpair failed and we were unable to recover it. 
00:26:13.098 [2024-04-24 21:41:35.854829] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.098 [2024-04-24 21:41:35.854957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.098 [2024-04-24 21:41:35.854975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.098 [2024-04-24 21:41:35.854985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.098 [2024-04-24 21:41:35.854994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.098 [2024-04-24 21:41:35.855012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.098 qpair failed and we were unable to recover it. 00:26:13.098 [2024-04-24 21:41:35.864860] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.098 [2024-04-24 21:41:35.864978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.098 [2024-04-24 21:41:35.864996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.098 [2024-04-24 21:41:35.865006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.098 [2024-04-24 21:41:35.865014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.098 [2024-04-24 21:41:35.865032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.098 qpair failed and we were unable to recover it. 00:26:13.098 [2024-04-24 21:41:35.874817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.098 [2024-04-24 21:41:35.874953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.098 [2024-04-24 21:41:35.874972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.098 [2024-04-24 21:41:35.874982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.098 [2024-04-24 21:41:35.874991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.098 [2024-04-24 21:41:35.875009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.098 qpair failed and we were unable to recover it. 
00:26:13.098 [2024-04-24 21:41:35.884918] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.098 [2024-04-24 21:41:35.885043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.098 [2024-04-24 21:41:35.885065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.098 [2024-04-24 21:41:35.885075] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.098 [2024-04-24 21:41:35.885084] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.098 [2024-04-24 21:41:35.885102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.098 qpair failed and we were unable to recover it. 00:26:13.098 [2024-04-24 21:41:35.894856] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.098 [2024-04-24 21:41:35.894989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.098 [2024-04-24 21:41:35.895008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.098 [2024-04-24 21:41:35.895018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.098 [2024-04-24 21:41:35.895027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.098 [2024-04-24 21:41:35.895044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.098 qpair failed and we were unable to recover it. 00:26:13.099 [2024-04-24 21:41:35.904969] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.099 [2024-04-24 21:41:35.905091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.099 [2024-04-24 21:41:35.905110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.099 [2024-04-24 21:41:35.905120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.099 [2024-04-24 21:41:35.905128] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.099 [2024-04-24 21:41:35.905146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.099 qpair failed and we were unable to recover it. 
00:26:13.099 [2024-04-24 21:41:35.914994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.099 [2024-04-24 21:41:35.915139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.099 [2024-04-24 21:41:35.915159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.099 [2024-04-24 21:41:35.915168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.099 [2024-04-24 21:41:35.915177] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.099 [2024-04-24 21:41:35.915195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.099 qpair failed and we were unable to recover it. 00:26:13.099 [2024-04-24 21:41:35.925000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.099 [2024-04-24 21:41:35.925140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.099 [2024-04-24 21:41:35.925159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.099 [2024-04-24 21:41:35.925169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.099 [2024-04-24 21:41:35.925177] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.099 [2024-04-24 21:41:35.925197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.099 qpair failed and we were unable to recover it. 00:26:13.099 [2024-04-24 21:41:35.935004] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.099 [2024-04-24 21:41:35.935139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.099 [2024-04-24 21:41:35.935158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.099 [2024-04-24 21:41:35.935168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.099 [2024-04-24 21:41:35.935177] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.099 [2024-04-24 21:41:35.935196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.099 qpair failed and we were unable to recover it. 
00:26:13.099 [2024-04-24 21:41:35.945053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.099 [2024-04-24 21:41:35.945217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.099 [2024-04-24 21:41:35.945236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.099 [2024-04-24 21:41:35.945246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.099 [2024-04-24 21:41:35.945254] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.099 [2024-04-24 21:41:35.945273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.099 qpair failed and we were unable to recover it. 00:26:13.099 [2024-04-24 21:41:35.955091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.099 [2024-04-24 21:41:35.955214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.099 [2024-04-24 21:41:35.955234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.099 [2024-04-24 21:41:35.955243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.099 [2024-04-24 21:41:35.955252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.099 [2024-04-24 21:41:35.955270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.099 qpair failed and we were unable to recover it. 00:26:13.099 [2024-04-24 21:41:35.965058] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.099 [2024-04-24 21:41:35.965183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.099 [2024-04-24 21:41:35.965202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.099 [2024-04-24 21:41:35.965212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.099 [2024-04-24 21:41:35.965221] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.099 [2024-04-24 21:41:35.965239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.099 qpair failed and we were unable to recover it. 
00:26:13.099 [2024-04-24 21:41:35.975137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.099 [2024-04-24 21:41:35.975262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.099 [2024-04-24 21:41:35.975284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.099 [2024-04-24 21:41:35.975294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.099 [2024-04-24 21:41:35.975302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.099 [2024-04-24 21:41:35.975321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.099 qpair failed and we were unable to recover it. 00:26:13.358 [2024-04-24 21:41:35.985237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.358 [2024-04-24 21:41:35.985363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.358 [2024-04-24 21:41:35.985382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.358 [2024-04-24 21:41:35.985392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.358 [2024-04-24 21:41:35.985400] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.358 [2024-04-24 21:41:35.985419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.358 qpair failed and we were unable to recover it. 00:26:13.358 [2024-04-24 21:41:35.995227] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.358 [2024-04-24 21:41:35.995351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.358 [2024-04-24 21:41:35.995370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.358 [2024-04-24 21:41:35.995380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.358 [2024-04-24 21:41:35.995389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.358 [2024-04-24 21:41:35.995408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.358 qpair failed and we were unable to recover it. 
00:26:13.358 [2024-04-24 21:41:36.005240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.358 [2024-04-24 21:41:36.005403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.359 [2024-04-24 21:41:36.005422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.359 [2024-04-24 21:41:36.005433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.359 [2024-04-24 21:41:36.005441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.359 [2024-04-24 21:41:36.005466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.359 qpair failed and we were unable to recover it. 00:26:13.359 [2024-04-24 21:41:36.015194] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.359 [2024-04-24 21:41:36.015315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.359 [2024-04-24 21:41:36.015334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.359 [2024-04-24 21:41:36.015344] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.359 [2024-04-24 21:41:36.015353] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.359 [2024-04-24 21:41:36.015375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.359 qpair failed and we were unable to recover it. 00:26:13.359 [2024-04-24 21:41:36.025513] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.359 [2024-04-24 21:41:36.025648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.359 [2024-04-24 21:41:36.025667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.359 [2024-04-24 21:41:36.025677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.359 [2024-04-24 21:41:36.025686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.359 [2024-04-24 21:41:36.025704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.359 qpair failed and we were unable to recover it. 
00:26:13.359 [2024-04-24 21:41:36.035434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.359 [2024-04-24 21:41:36.035561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.359 [2024-04-24 21:41:36.035580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.359 [2024-04-24 21:41:36.035590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.359 [2024-04-24 21:41:36.035598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.359 [2024-04-24 21:41:36.035617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.359 qpair failed and we were unable to recover it. 00:26:13.359 [2024-04-24 21:41:36.045278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.359 [2024-04-24 21:41:36.045406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.359 [2024-04-24 21:41:36.045425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.359 [2024-04-24 21:41:36.045434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.359 [2024-04-24 21:41:36.045443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.359 [2024-04-24 21:41:36.045469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.359 qpair failed and we were unable to recover it. 00:26:13.359 [2024-04-24 21:41:36.055355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.359 [2024-04-24 21:41:36.055513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.359 [2024-04-24 21:41:36.055532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.359 [2024-04-24 21:41:36.055542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.359 [2024-04-24 21:41:36.055550] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.359 [2024-04-24 21:41:36.055569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.359 qpair failed and we were unable to recover it. 
00:26:13.359 [2024-04-24 21:41:36.065322] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.359 [2024-04-24 21:41:36.065454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.359 [2024-04-24 21:41:36.065476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.359 [2024-04-24 21:41:36.065486] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.359 [2024-04-24 21:41:36.065495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.359 [2024-04-24 21:41:36.065513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.359 qpair failed and we were unable to recover it. 00:26:13.359 [2024-04-24 21:41:36.075402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.359 [2024-04-24 21:41:36.075557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.359 [2024-04-24 21:41:36.075577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.359 [2024-04-24 21:41:36.075587] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.359 [2024-04-24 21:41:36.075595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.359 [2024-04-24 21:41:36.075614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.359 qpair failed and we were unable to recover it. 00:26:13.359 [2024-04-24 21:41:36.085403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.359 [2024-04-24 21:41:36.085540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.359 [2024-04-24 21:41:36.085559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.359 [2024-04-24 21:41:36.085569] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.359 [2024-04-24 21:41:36.085578] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.359 [2024-04-24 21:41:36.085596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.359 qpair failed and we were unable to recover it. 
00:26:13.359 [2024-04-24 21:41:36.095498] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.359 [2024-04-24 21:41:36.095818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.359 [2024-04-24 21:41:36.095838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.359 [2024-04-24 21:41:36.095848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.359 [2024-04-24 21:41:36.095857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.359 [2024-04-24 21:41:36.095876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.359 qpair failed and we were unable to recover it. 00:26:13.359 [2024-04-24 21:41:36.105505] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.359 [2024-04-24 21:41:36.105662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.359 [2024-04-24 21:41:36.105681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.359 [2024-04-24 21:41:36.105691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.359 [2024-04-24 21:41:36.105702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.359 [2024-04-24 21:41:36.105721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.359 qpair failed and we were unable to recover it. 00:26:13.359 [2024-04-24 21:41:36.115583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.359 [2024-04-24 21:41:36.115723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.359 [2024-04-24 21:41:36.115742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.359 [2024-04-24 21:41:36.115752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.359 [2024-04-24 21:41:36.115760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.359 [2024-04-24 21:41:36.115778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.359 qpair failed and we were unable to recover it. 
00:26:13.359 [2024-04-24 21:41:36.125527] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.359 [2024-04-24 21:41:36.125667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.360 [2024-04-24 21:41:36.125685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.360 [2024-04-24 21:41:36.125695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.360 [2024-04-24 21:41:36.125704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.360 [2024-04-24 21:41:36.125722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.360 qpair failed and we were unable to recover it. 00:26:13.360 [2024-04-24 21:41:36.135607] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.360 [2024-04-24 21:41:36.135735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.360 [2024-04-24 21:41:36.135754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.360 [2024-04-24 21:41:36.135764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.360 [2024-04-24 21:41:36.135773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.360 [2024-04-24 21:41:36.135791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.360 qpair failed and we were unable to recover it. 00:26:13.360 [2024-04-24 21:41:36.145665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.360 [2024-04-24 21:41:36.145793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.360 [2024-04-24 21:41:36.145812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.360 [2024-04-24 21:41:36.145822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.360 [2024-04-24 21:41:36.145831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.360 [2024-04-24 21:41:36.145849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.360 qpair failed and we were unable to recover it. 
00:26:13.360 [2024-04-24 21:41:36.155685] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.360 [2024-04-24 21:41:36.155815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.360 [2024-04-24 21:41:36.155833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.360 [2024-04-24 21:41:36.155843] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.360 [2024-04-24 21:41:36.155852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.360 [2024-04-24 21:41:36.155871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.360 qpair failed and we were unable to recover it. 00:26:13.360 [2024-04-24 21:41:36.165732] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.360 [2024-04-24 21:41:36.165856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.360 [2024-04-24 21:41:36.165875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.360 [2024-04-24 21:41:36.165885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.360 [2024-04-24 21:41:36.165894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.360 [2024-04-24 21:41:36.165913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.360 qpair failed and we were unable to recover it. 00:26:13.360 [2024-04-24 21:41:36.175717] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.360 [2024-04-24 21:41:36.175840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.360 [2024-04-24 21:41:36.175860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.360 [2024-04-24 21:41:36.175870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.360 [2024-04-24 21:41:36.175878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.360 [2024-04-24 21:41:36.175897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.360 qpair failed and we were unable to recover it. 
00:26:13.360 [2024-04-24 21:41:36.185749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.360 [2024-04-24 21:41:36.185870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.360 [2024-04-24 21:41:36.185889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.360 [2024-04-24 21:41:36.185899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.360 [2024-04-24 21:41:36.185908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.360 [2024-04-24 21:41:36.185926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.360 qpair failed and we were unable to recover it. 00:26:13.360 [2024-04-24 21:41:36.195823] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.360 [2024-04-24 21:41:36.195964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.360 [2024-04-24 21:41:36.195982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.360 [2024-04-24 21:41:36.195993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.360 [2024-04-24 21:41:36.196005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.360 [2024-04-24 21:41:36.196023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.360 qpair failed and we were unable to recover it. 00:26:13.360 [2024-04-24 21:41:36.205745] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.360 [2024-04-24 21:41:36.205870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.360 [2024-04-24 21:41:36.205889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.360 [2024-04-24 21:41:36.205899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.360 [2024-04-24 21:41:36.205908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.360 [2024-04-24 21:41:36.205926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.360 qpair failed and we were unable to recover it. 
00:26:13.360 [2024-04-24 21:41:36.215822] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.360 [2024-04-24 21:41:36.215943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.360 [2024-04-24 21:41:36.215962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.360 [2024-04-24 21:41:36.215972] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.360 [2024-04-24 21:41:36.215981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.360 [2024-04-24 21:41:36.215999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.360 qpair failed and we were unable to recover it. 00:26:13.360 [2024-04-24 21:41:36.225871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.360 [2024-04-24 21:41:36.225996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.360 [2024-04-24 21:41:36.226016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.360 [2024-04-24 21:41:36.226026] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.360 [2024-04-24 21:41:36.226035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.360 [2024-04-24 21:41:36.226053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.360 qpair failed and we were unable to recover it. 00:26:13.360 [2024-04-24 21:41:36.235885] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.360 [2024-04-24 21:41:36.236009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.360 [2024-04-24 21:41:36.236028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.360 [2024-04-24 21:41:36.236038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.360 [2024-04-24 21:41:36.236047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.360 [2024-04-24 21:41:36.236066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.360 qpair failed and we were unable to recover it. 
00:26:13.649 [2024-04-24 21:41:36.245914] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.649 [2024-04-24 21:41:36.246045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.649 [2024-04-24 21:41:36.246064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.649 [2024-04-24 21:41:36.246074] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.649 [2024-04-24 21:41:36.246083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.649 [2024-04-24 21:41:36.246102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-04-24 21:41:36.255881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.649 [2024-04-24 21:41:36.256005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.649 [2024-04-24 21:41:36.256024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.649 [2024-04-24 21:41:36.256034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.649 [2024-04-24 21:41:36.256043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.649 [2024-04-24 21:41:36.256061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-04-24 21:41:36.265967] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.649 [2024-04-24 21:41:36.266090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.649 [2024-04-24 21:41:36.266109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.649 [2024-04-24 21:41:36.266118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.649 [2024-04-24 21:41:36.266127] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.649 [2024-04-24 21:41:36.266145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.649 qpair failed and we were unable to recover it. 
00:26:13.649 [2024-04-24 21:41:36.275948] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.649 [2024-04-24 21:41:36.276071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.649 [2024-04-24 21:41:36.276090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.649 [2024-04-24 21:41:36.276100] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.649 [2024-04-24 21:41:36.276109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.649 [2024-04-24 21:41:36.276127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-04-24 21:41:36.286020] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.649 [2024-04-24 21:41:36.286151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.649 [2024-04-24 21:41:36.286170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.649 [2024-04-24 21:41:36.286184] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.649 [2024-04-24 21:41:36.286192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.649 [2024-04-24 21:41:36.286211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-04-24 21:41:36.296011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.649 [2024-04-24 21:41:36.296138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.649 [2024-04-24 21:41:36.296157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.649 [2024-04-24 21:41:36.296167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.649 [2024-04-24 21:41:36.296175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.649 [2024-04-24 21:41:36.296194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.649 qpair failed and we were unable to recover it. 
00:26:13.649 [2024-04-24 21:41:36.306102] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.649 [2024-04-24 21:41:36.306226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.649 [2024-04-24 21:41:36.306244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.649 [2024-04-24 21:41:36.306255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.649 [2024-04-24 21:41:36.306263] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.649 [2024-04-24 21:41:36.306282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-04-24 21:41:36.316065] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.649 [2024-04-24 21:41:36.316189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.649 [2024-04-24 21:41:36.316208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.649 [2024-04-24 21:41:36.316218] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.649 [2024-04-24 21:41:36.316226] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.649 [2024-04-24 21:41:36.316245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-04-24 21:41:36.326085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.649 [2024-04-24 21:41:36.326210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.649 [2024-04-24 21:41:36.326229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.649 [2024-04-24 21:41:36.326240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.649 [2024-04-24 21:41:36.326248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.649 [2024-04-24 21:41:36.326267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.649 qpair failed and we were unable to recover it. 
00:26:13.649 [2024-04-24 21:41:36.336185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.649 [2024-04-24 21:41:36.336307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.649 [2024-04-24 21:41:36.336325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.649 [2024-04-24 21:41:36.336336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.649 [2024-04-24 21:41:36.336344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.649 [2024-04-24 21:41:36.336362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-04-24 21:41:36.346199] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.649 [2024-04-24 21:41:36.346319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.649 [2024-04-24 21:41:36.346337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.649 [2024-04-24 21:41:36.346347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.649 [2024-04-24 21:41:36.346356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.649 [2024-04-24 21:41:36.346374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.649 qpair failed and we were unable to recover it. 00:26:13.649 [2024-04-24 21:41:36.356191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.649 [2024-04-24 21:41:36.356315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.649 [2024-04-24 21:41:36.356333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.649 [2024-04-24 21:41:36.356343] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.649 [2024-04-24 21:41:36.356352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.649 [2024-04-24 21:41:36.356370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.649 qpair failed and we were unable to recover it. 
00:26:13.650 [2024-04-24 21:41:36.366250] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.650 [2024-04-24 21:41:36.366375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.650 [2024-04-24 21:41:36.366394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.650 [2024-04-24 21:41:36.366404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.650 [2024-04-24 21:41:36.366412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.650 [2024-04-24 21:41:36.366431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-04-24 21:41:36.376295] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.650 [2024-04-24 21:41:36.376461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.650 [2024-04-24 21:41:36.376480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.650 [2024-04-24 21:41:36.376493] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.650 [2024-04-24 21:41:36.376502] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.650 [2024-04-24 21:41:36.376520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-04-24 21:41:36.386348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.650 [2024-04-24 21:41:36.386490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.650 [2024-04-24 21:41:36.386508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.650 [2024-04-24 21:41:36.386518] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.650 [2024-04-24 21:41:36.386527] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.650 [2024-04-24 21:41:36.386545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.650 qpair failed and we were unable to recover it. 
00:26:13.650 [2024-04-24 21:41:36.396284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.650 [2024-04-24 21:41:36.396411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.650 [2024-04-24 21:41:36.396430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.650 [2024-04-24 21:41:36.396440] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.650 [2024-04-24 21:41:36.396449] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.650 [2024-04-24 21:41:36.396473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-04-24 21:41:36.406307] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.650 [2024-04-24 21:41:36.406435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.650 [2024-04-24 21:41:36.406461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.650 [2024-04-24 21:41:36.406472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.650 [2024-04-24 21:41:36.406480] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.650 [2024-04-24 21:41:36.406498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-04-24 21:41:36.416406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.650 [2024-04-24 21:41:36.416536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.650 [2024-04-24 21:41:36.416555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.650 [2024-04-24 21:41:36.416565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.650 [2024-04-24 21:41:36.416574] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.650 [2024-04-24 21:41:36.416593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.650 qpair failed and we were unable to recover it. 
00:26:13.650 [2024-04-24 21:41:36.426358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.650 [2024-04-24 21:41:36.426495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.650 [2024-04-24 21:41:36.426516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.650 [2024-04-24 21:41:36.426526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.650 [2024-04-24 21:41:36.426535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.650 [2024-04-24 21:41:36.426554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-04-24 21:41:36.436403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.650 [2024-04-24 21:41:36.436571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.650 [2024-04-24 21:41:36.436590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.650 [2024-04-24 21:41:36.436600] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.650 [2024-04-24 21:41:36.436608] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.650 [2024-04-24 21:41:36.436626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-04-24 21:41:36.446487] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.650 [2024-04-24 21:41:36.446613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.650 [2024-04-24 21:41:36.446632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.650 [2024-04-24 21:41:36.446641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.650 [2024-04-24 21:41:36.446650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.650 [2024-04-24 21:41:36.446668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.650 qpair failed and we were unable to recover it. 
00:26:13.650 [2024-04-24 21:41:36.456547] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.650 [2024-04-24 21:41:36.456719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.650 [2024-04-24 21:41:36.456738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.650 [2024-04-24 21:41:36.456748] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.650 [2024-04-24 21:41:36.456756] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.650 [2024-04-24 21:41:36.456775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-04-24 21:41:36.466492] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.650 [2024-04-24 21:41:36.466617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.650 [2024-04-24 21:41:36.466636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.650 [2024-04-24 21:41:36.466649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.650 [2024-04-24 21:41:36.466658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.650 [2024-04-24 21:41:36.466677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-04-24 21:41:36.476591] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.650 [2024-04-24 21:41:36.476717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.650 [2024-04-24 21:41:36.476738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.650 [2024-04-24 21:41:36.476748] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.650 [2024-04-24 21:41:36.476757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.650 [2024-04-24 21:41:36.476776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.650 qpair failed and we were unable to recover it. 
00:26:13.650 [2024-04-24 21:41:36.486602] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.650 [2024-04-24 21:41:36.486924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.650 [2024-04-24 21:41:36.486942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.650 [2024-04-24 21:41:36.486952] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.650 [2024-04-24 21:41:36.486960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.650 [2024-04-24 21:41:36.486978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.650 qpair failed and we were unable to recover it. 00:26:13.650 [2024-04-24 21:41:36.496639] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.650 [2024-04-24 21:41:36.496768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.651 [2024-04-24 21:41:36.496787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.651 [2024-04-24 21:41:36.496797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.651 [2024-04-24 21:41:36.496806] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.651 [2024-04-24 21:41:36.496825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.651 qpair failed and we were unable to recover it. 00:26:13.910 [2024-04-24 21:41:36.506607] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.910 [2024-04-24 21:41:36.506736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.910 [2024-04-24 21:41:36.506755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.910 [2024-04-24 21:41:36.506765] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.910 [2024-04-24 21:41:36.506773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.910 [2024-04-24 21:41:36.506792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.910 qpair failed and we were unable to recover it. 
00:26:13.910 [2024-04-24 21:41:36.516714] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.910 [2024-04-24 21:41:36.516843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.910 [2024-04-24 21:41:36.516862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.910 [2024-04-24 21:41:36.516872] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.910 [2024-04-24 21:41:36.516881] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.910 [2024-04-24 21:41:36.516900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.910 qpair failed and we were unable to recover it. 00:26:13.910 [2024-04-24 21:41:36.526713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.910 [2024-04-24 21:41:36.526843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.910 [2024-04-24 21:41:36.526862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.910 [2024-04-24 21:41:36.526872] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.910 [2024-04-24 21:41:36.526881] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.910 [2024-04-24 21:41:36.526899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.910 qpair failed and we were unable to recover it. 00:26:13.910 [2024-04-24 21:41:36.536691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.910 [2024-04-24 21:41:36.536823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.910 [2024-04-24 21:41:36.536842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.910 [2024-04-24 21:41:36.536852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.910 [2024-04-24 21:41:36.536861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.910 [2024-04-24 21:41:36.536879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.910 qpair failed and we were unable to recover it. 
00:26:13.910 [2024-04-24 21:41:36.546774] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.910 [2024-04-24 21:41:36.546898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.910 [2024-04-24 21:41:36.546917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.910 [2024-04-24 21:41:36.546927] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.910 [2024-04-24 21:41:36.546935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.910 [2024-04-24 21:41:36.546954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.911 qpair failed and we were unable to recover it. 00:26:13.911 [2024-04-24 21:41:36.556830] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.911 [2024-04-24 21:41:36.556957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.911 [2024-04-24 21:41:36.556978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.911 [2024-04-24 21:41:36.556989] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.911 [2024-04-24 21:41:36.556997] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.911 [2024-04-24 21:41:36.557015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.911 qpair failed and we were unable to recover it. 00:26:13.911 [2024-04-24 21:41:36.566847] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.911 [2024-04-24 21:41:36.566976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.911 [2024-04-24 21:41:36.566994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.911 [2024-04-24 21:41:36.567004] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.911 [2024-04-24 21:41:36.567013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.911 [2024-04-24 21:41:36.567031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.911 qpair failed and we were unable to recover it. 
00:26:13.911 [2024-04-24 21:41:36.576899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.911 [2024-04-24 21:41:36.577039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.911 [2024-04-24 21:41:36.577059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.911 [2024-04-24 21:41:36.577069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.911 [2024-04-24 21:41:36.577078] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.911 [2024-04-24 21:41:36.577097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.911 qpair failed and we were unable to recover it. 00:26:13.911 [2024-04-24 21:41:36.586921] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.911 [2024-04-24 21:41:36.587060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.911 [2024-04-24 21:41:36.587079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.911 [2024-04-24 21:41:36.587088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.911 [2024-04-24 21:41:36.587097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.911 [2024-04-24 21:41:36.587115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.911 qpair failed and we were unable to recover it. 00:26:13.911 [2024-04-24 21:41:36.596981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.911 [2024-04-24 21:41:36.597107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.911 [2024-04-24 21:41:36.597126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.911 [2024-04-24 21:41:36.597136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.911 [2024-04-24 21:41:36.597144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.911 [2024-04-24 21:41:36.597162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.911 qpair failed and we were unable to recover it. 
00:26:13.911 [2024-04-24 21:41:36.606972] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.911 [2024-04-24 21:41:36.607115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.911 [2024-04-24 21:41:36.607134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.911 [2024-04-24 21:41:36.607144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.911 [2024-04-24 21:41:36.607153] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.911 [2024-04-24 21:41:36.607171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.911 qpair failed and we were unable to recover it. 00:26:13.911 [2024-04-24 21:41:36.616993] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.911 [2024-04-24 21:41:36.617122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.911 [2024-04-24 21:41:36.617141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.911 [2024-04-24 21:41:36.617151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.911 [2024-04-24 21:41:36.617159] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.911 [2024-04-24 21:41:36.617178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.911 qpair failed and we were unable to recover it. 00:26:13.911 [2024-04-24 21:41:36.627019] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.911 [2024-04-24 21:41:36.627148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.911 [2024-04-24 21:41:36.627167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.911 [2024-04-24 21:41:36.627177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.911 [2024-04-24 21:41:36.627186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.911 [2024-04-24 21:41:36.627204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.911 qpair failed and we were unable to recover it. 
00:26:13.911 [2024-04-24 21:41:36.637094] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.911 [2024-04-24 21:41:36.637225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.911 [2024-04-24 21:41:36.637245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.911 [2024-04-24 21:41:36.637255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.911 [2024-04-24 21:41:36.637263] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.911 [2024-04-24 21:41:36.637281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.911 qpair failed and we were unable to recover it. 00:26:13.911 [2024-04-24 21:41:36.647095] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.911 [2024-04-24 21:41:36.647237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.911 [2024-04-24 21:41:36.647259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.911 [2024-04-24 21:41:36.647269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.911 [2024-04-24 21:41:36.647277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.911 [2024-04-24 21:41:36.647296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.911 qpair failed and we were unable to recover it. 00:26:13.911 [2024-04-24 21:41:36.657035] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.911 [2024-04-24 21:41:36.657158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.911 [2024-04-24 21:41:36.657178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.911 [2024-04-24 21:41:36.657188] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.911 [2024-04-24 21:41:36.657197] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.911 [2024-04-24 21:41:36.657215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.911 qpair failed and we were unable to recover it. 
00:26:13.911 [2024-04-24 21:41:36.667134] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.911 [2024-04-24 21:41:36.667266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.911 [2024-04-24 21:41:36.667285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.911 [2024-04-24 21:41:36.667295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.911 [2024-04-24 21:41:36.667303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.911 [2024-04-24 21:41:36.667321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.911 qpair failed and we were unable to recover it. 00:26:13.911 [2024-04-24 21:41:36.677129] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.911 [2024-04-24 21:41:36.677269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.911 [2024-04-24 21:41:36.677288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.911 [2024-04-24 21:41:36.677298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.911 [2024-04-24 21:41:36.677307] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.911 [2024-04-24 21:41:36.677325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.911 qpair failed and we were unable to recover it. 00:26:13.911 [2024-04-24 21:41:36.687120] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.911 [2024-04-24 21:41:36.687249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.912 [2024-04-24 21:41:36.687268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.912 [2024-04-24 21:41:36.687277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.912 [2024-04-24 21:41:36.687286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.912 [2024-04-24 21:41:36.687308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.912 qpair failed and we were unable to recover it. 
00:26:13.912 [2024-04-24 21:41:36.697214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.912 [2024-04-24 21:41:36.697340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.912 [2024-04-24 21:41:36.697359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.912 [2024-04-24 21:41:36.697369] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.912 [2024-04-24 21:41:36.697378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.912 [2024-04-24 21:41:36.697396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.912 qpair failed and we were unable to recover it. 00:26:13.912 [2024-04-24 21:41:36.707243] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.912 [2024-04-24 21:41:36.707367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.912 [2024-04-24 21:41:36.707386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.912 [2024-04-24 21:41:36.707396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.912 [2024-04-24 21:41:36.707404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.912 [2024-04-24 21:41:36.707423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.912 qpair failed and we were unable to recover it. 00:26:13.912 [2024-04-24 21:41:36.717280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.912 [2024-04-24 21:41:36.717406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.912 [2024-04-24 21:41:36.717425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.912 [2024-04-24 21:41:36.717435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.912 [2024-04-24 21:41:36.717443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.912 [2024-04-24 21:41:36.717467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.912 qpair failed and we were unable to recover it. 
00:26:13.912 [2024-04-24 21:41:36.727305] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.912 [2024-04-24 21:41:36.727438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.912 [2024-04-24 21:41:36.727465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.912 [2024-04-24 21:41:36.727475] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.912 [2024-04-24 21:41:36.727484] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.912 [2024-04-24 21:41:36.727503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.912 qpair failed and we were unable to recover it. 00:26:13.912 [2024-04-24 21:41:36.737347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.912 [2024-04-24 21:41:36.737479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.912 [2024-04-24 21:41:36.737501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.912 [2024-04-24 21:41:36.737511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.912 [2024-04-24 21:41:36.737520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.912 [2024-04-24 21:41:36.737539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.912 qpair failed and we were unable to recover it. 00:26:13.912 [2024-04-24 21:41:36.747361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.912 [2024-04-24 21:41:36.747487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.912 [2024-04-24 21:41:36.747506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.912 [2024-04-24 21:41:36.747516] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.912 [2024-04-24 21:41:36.747524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.912 [2024-04-24 21:41:36.747542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.912 qpair failed and we were unable to recover it. 
00:26:13.912 [2024-04-24 21:41:36.757347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.912 [2024-04-24 21:41:36.757481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.912 [2024-04-24 21:41:36.757499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.912 [2024-04-24 21:41:36.757509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.912 [2024-04-24 21:41:36.757518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.912 [2024-04-24 21:41:36.757536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.912 qpair failed and we were unable to recover it. 00:26:13.912 [2024-04-24 21:41:36.767435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.912 [2024-04-24 21:41:36.767576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.912 [2024-04-24 21:41:36.767596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.912 [2024-04-24 21:41:36.767606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.912 [2024-04-24 21:41:36.767614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.912 [2024-04-24 21:41:36.767633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.912 qpair failed and we were unable to recover it. 00:26:13.912 [2024-04-24 21:41:36.777368] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.912 [2024-04-24 21:41:36.777498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.912 [2024-04-24 21:41:36.777517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.912 [2024-04-24 21:41:36.777527] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.912 [2024-04-24 21:41:36.777536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.912 [2024-04-24 21:41:36.777557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.912 qpair failed and we were unable to recover it. 
00:26:13.912 [2024-04-24 21:41:36.787477] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:13.912 [2024-04-24 21:41:36.787597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:13.912 [2024-04-24 21:41:36.787615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:13.912 [2024-04-24 21:41:36.787625] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:13.912 [2024-04-24 21:41:36.787634] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:13.912 [2024-04-24 21:41:36.787652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:13.912 qpair failed and we were unable to recover it. 00:26:14.171 [2024-04-24 21:41:36.797439] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.171 [2024-04-24 21:41:36.797572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.171 [2024-04-24 21:41:36.797591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.171 [2024-04-24 21:41:36.797601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.171 [2024-04-24 21:41:36.797609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.172 [2024-04-24 21:41:36.797628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.172 qpair failed and we were unable to recover it. 00:26:14.172 [2024-04-24 21:41:36.807528] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.172 [2024-04-24 21:41:36.807674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.172 [2024-04-24 21:41:36.807693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.172 [2024-04-24 21:41:36.807703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.172 [2024-04-24 21:41:36.807711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.172 [2024-04-24 21:41:36.807730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.172 qpair failed and we were unable to recover it. 
00:26:14.172 [2024-04-24 21:41:36.817601] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.172 [2024-04-24 21:41:36.817744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.172 [2024-04-24 21:41:36.817763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.172 [2024-04-24 21:41:36.817773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.172 [2024-04-24 21:41:36.817781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.172 [2024-04-24 21:41:36.817799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.172 qpair failed and we were unable to recover it. 00:26:14.172 [2024-04-24 21:41:36.827598] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.172 [2024-04-24 21:41:36.827722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.172 [2024-04-24 21:41:36.827744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.172 [2024-04-24 21:41:36.827754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.172 [2024-04-24 21:41:36.827762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.172 [2024-04-24 21:41:36.827780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.172 qpair failed and we were unable to recover it. 00:26:14.172 [2024-04-24 21:41:36.837617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.172 [2024-04-24 21:41:36.837742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.172 [2024-04-24 21:41:36.837761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.172 [2024-04-24 21:41:36.837771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.172 [2024-04-24 21:41:36.837780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.172 [2024-04-24 21:41:36.837798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.172 qpair failed and we were unable to recover it. 
00:26:14.172 [2024-04-24 21:41:36.847631] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.172 [2024-04-24 21:41:36.847809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.172 [2024-04-24 21:41:36.847828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.172 [2024-04-24 21:41:36.847838] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.172 [2024-04-24 21:41:36.847847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.172 [2024-04-24 21:41:36.847865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.172 qpair failed and we were unable to recover it. 00:26:14.172 [2024-04-24 21:41:36.857591] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.172 [2024-04-24 21:41:36.857723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.172 [2024-04-24 21:41:36.857742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.172 [2024-04-24 21:41:36.857752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.172 [2024-04-24 21:41:36.857761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.172 [2024-04-24 21:41:36.857779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.172 qpair failed and we were unable to recover it. 00:26:14.172 [2024-04-24 21:41:36.867686] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.172 [2024-04-24 21:41:36.867813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.172 [2024-04-24 21:41:36.867832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.172 [2024-04-24 21:41:36.867842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.172 [2024-04-24 21:41:36.867853] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.172 [2024-04-24 21:41:36.867872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.172 qpair failed and we were unable to recover it. 
00:26:14.172 [2024-04-24 21:41:36.877726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.172 [2024-04-24 21:41:36.877850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.172 [2024-04-24 21:41:36.877869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.172 [2024-04-24 21:41:36.877879] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.172 [2024-04-24 21:41:36.877887] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.172 [2024-04-24 21:41:36.877906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.172 qpair failed and we were unable to recover it. 00:26:14.172 [2024-04-24 21:41:36.887739] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.172 [2024-04-24 21:41:36.887866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.172 [2024-04-24 21:41:36.887885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.172 [2024-04-24 21:41:36.887894] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.172 [2024-04-24 21:41:36.887903] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.172 [2024-04-24 21:41:36.887922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.172 qpair failed and we were unable to recover it. 00:26:14.172 [2024-04-24 21:41:36.897778] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.172 [2024-04-24 21:41:36.897922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.173 [2024-04-24 21:41:36.897941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.173 [2024-04-24 21:41:36.897951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.173 [2024-04-24 21:41:36.897960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.173 [2024-04-24 21:41:36.897978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.173 qpair failed and we were unable to recover it. 
00:26:14.173 [2024-04-24 21:41:36.907828] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.173 [2024-04-24 21:41:36.907970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.173 [2024-04-24 21:41:36.907989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.173 [2024-04-24 21:41:36.907998] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.173 [2024-04-24 21:41:36.908007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.173 [2024-04-24 21:41:36.908025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.173 qpair failed and we were unable to recover it. 00:26:14.173 [2024-04-24 21:41:36.917821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.173 [2024-04-24 21:41:36.917949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.173 [2024-04-24 21:41:36.917968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.173 [2024-04-24 21:41:36.917978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.173 [2024-04-24 21:41:36.917987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.173 [2024-04-24 21:41:36.918005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.173 qpair failed and we were unable to recover it. 00:26:14.173 [2024-04-24 21:41:36.927859] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.173 [2024-04-24 21:41:36.927983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.173 [2024-04-24 21:41:36.928002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.173 [2024-04-24 21:41:36.928012] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.173 [2024-04-24 21:41:36.928021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.173 [2024-04-24 21:41:36.928040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.173 qpair failed and we were unable to recover it. 
00:26:14.173 [2024-04-24 21:41:36.937881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.173 [2024-04-24 21:41:36.938007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.173 [2024-04-24 21:41:36.938027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.173 [2024-04-24 21:41:36.938037] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.173 [2024-04-24 21:41:36.938046] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.173 [2024-04-24 21:41:36.938064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.173 qpair failed and we were unable to recover it. 00:26:14.173 [2024-04-24 21:41:36.947935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.173 [2024-04-24 21:41:36.948072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.173 [2024-04-24 21:41:36.948091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.173 [2024-04-24 21:41:36.948101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.173 [2024-04-24 21:41:36.948109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.173 [2024-04-24 21:41:36.948128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.173 qpair failed and we were unable to recover it. 00:26:14.173 [2024-04-24 21:41:36.957949] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.173 [2024-04-24 21:41:36.958071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.173 [2024-04-24 21:41:36.958090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.173 [2024-04-24 21:41:36.958100] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.173 [2024-04-24 21:41:36.958111] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.173 [2024-04-24 21:41:36.958129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.173 qpair failed and we were unable to recover it. 
00:26:14.173 [2024-04-24 21:41:36.968006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.173 [2024-04-24 21:41:36.968149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.173 [2024-04-24 21:41:36.968168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.173 [2024-04-24 21:41:36.968178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.173 [2024-04-24 21:41:36.968186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.173 [2024-04-24 21:41:36.968204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.173 qpair failed and we were unable to recover it. 00:26:14.173 [2024-04-24 21:41:36.977991] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.173 [2024-04-24 21:41:36.978114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.173 [2024-04-24 21:41:36.978133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.173 [2024-04-24 21:41:36.978143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.173 [2024-04-24 21:41:36.978152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.173 [2024-04-24 21:41:36.978170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.173 qpair failed and we were unable to recover it. 00:26:14.173 [2024-04-24 21:41:36.988020] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.173 [2024-04-24 21:41:36.988175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.173 [2024-04-24 21:41:36.988194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.173 [2024-04-24 21:41:36.988204] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.173 [2024-04-24 21:41:36.988212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.173 [2024-04-24 21:41:36.988231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.173 qpair failed and we were unable to recover it. 
00:26:14.173 [2024-04-24 21:41:36.997991] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.173 [2024-04-24 21:41:36.998118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.173 [2024-04-24 21:41:36.998137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.173 [2024-04-24 21:41:36.998147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.173 [2024-04-24 21:41:36.998156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.173 [2024-04-24 21:41:36.998174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.173 qpair failed and we were unable to recover it. 00:26:14.173 [2024-04-24 21:41:37.008054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.173 [2024-04-24 21:41:37.008181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.173 [2024-04-24 21:41:37.008200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.173 [2024-04-24 21:41:37.008209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.173 [2024-04-24 21:41:37.008218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.173 [2024-04-24 21:41:37.008237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.173 qpair failed and we were unable to recover it. 00:26:14.173 [2024-04-24 21:41:37.018130] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.173 [2024-04-24 21:41:37.018289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.173 [2024-04-24 21:41:37.018308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.173 [2024-04-24 21:41:37.018317] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.173 [2024-04-24 21:41:37.018326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.173 [2024-04-24 21:41:37.018344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.174 qpair failed and we were unable to recover it. 
00:26:14.174 [2024-04-24 21:41:37.028139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.174 [2024-04-24 21:41:37.028261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.174 [2024-04-24 21:41:37.028280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.174 [2024-04-24 21:41:37.028290] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.174 [2024-04-24 21:41:37.028298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.174 [2024-04-24 21:41:37.028317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.174 qpair failed and we were unable to recover it. 00:26:14.174 [2024-04-24 21:41:37.038177] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.174 [2024-04-24 21:41:37.038305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.174 [2024-04-24 21:41:37.038323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.174 [2024-04-24 21:41:37.038333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.174 [2024-04-24 21:41:37.038342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.174 [2024-04-24 21:41:37.038360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.174 qpair failed and we were unable to recover it. 00:26:14.174 [2024-04-24 21:41:37.048200] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.174 [2024-04-24 21:41:37.048326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.174 [2024-04-24 21:41:37.048345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.174 [2024-04-24 21:41:37.048355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.174 [2024-04-24 21:41:37.048366] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.174 [2024-04-24 21:41:37.048385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.174 qpair failed and we were unable to recover it. 
00:26:14.433 [2024-04-24 21:41:37.058212] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.433 [2024-04-24 21:41:37.058337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.433 [2024-04-24 21:41:37.058356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.433 [2024-04-24 21:41:37.058366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.433 [2024-04-24 21:41:37.058375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.433 [2024-04-24 21:41:37.058393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.433 qpair failed and we were unable to recover it. 00:26:14.433 [2024-04-24 21:41:37.068163] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.433 [2024-04-24 21:41:37.068332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.433 [2024-04-24 21:41:37.068350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.433 [2024-04-24 21:41:37.068359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.433 [2024-04-24 21:41:37.068368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.433 [2024-04-24 21:41:37.068387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.433 qpair failed and we were unable to recover it. 00:26:14.433 [2024-04-24 21:41:37.078274] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.433 [2024-04-24 21:41:37.078399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.433 [2024-04-24 21:41:37.078418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.433 [2024-04-24 21:41:37.078428] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.433 [2024-04-24 21:41:37.078437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.433 [2024-04-24 21:41:37.078460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.433 qpair failed and we were unable to recover it. 
00:26:14.433 [2024-04-24 21:41:37.088297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.433 [2024-04-24 21:41:37.088428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.433 [2024-04-24 21:41:37.088447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.433 [2024-04-24 21:41:37.088461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.433 [2024-04-24 21:41:37.088470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.433 [2024-04-24 21:41:37.088489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.433 qpair failed and we were unable to recover it. 00:26:14.433 [2024-04-24 21:41:37.098364] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.433 [2024-04-24 21:41:37.098536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.433 [2024-04-24 21:41:37.098556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.433 [2024-04-24 21:41:37.098566] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.433 [2024-04-24 21:41:37.098574] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.433 [2024-04-24 21:41:37.098593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.433 qpair failed and we were unable to recover it. 00:26:14.433 [2024-04-24 21:41:37.108361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.433 [2024-04-24 21:41:37.108489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.433 [2024-04-24 21:41:37.108508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.433 [2024-04-24 21:41:37.108518] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.433 [2024-04-24 21:41:37.108527] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.433 [2024-04-24 21:41:37.108545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.433 qpair failed and we were unable to recover it. 
00:26:14.433 [2024-04-24 21:41:37.118430] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.433 [2024-04-24 21:41:37.118569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.433 [2024-04-24 21:41:37.118589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.433 [2024-04-24 21:41:37.118599] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.433 [2024-04-24 21:41:37.118607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.433 [2024-04-24 21:41:37.118625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.433 qpair failed and we were unable to recover it. 00:26:14.433 [2024-04-24 21:41:37.128454] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.433 [2024-04-24 21:41:37.128589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.433 [2024-04-24 21:41:37.128608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.433 [2024-04-24 21:41:37.128618] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.433 [2024-04-24 21:41:37.128626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.433 [2024-04-24 21:41:37.128644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.433 qpair failed and we were unable to recover it. 00:26:14.433 [2024-04-24 21:41:37.138454] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.433 [2024-04-24 21:41:37.138773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.433 [2024-04-24 21:41:37.138793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.433 [2024-04-24 21:41:37.138806] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.433 [2024-04-24 21:41:37.138816] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.433 [2024-04-24 21:41:37.138834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.433 qpair failed and we were unable to recover it. 
00:26:14.433 [2024-04-24 21:41:37.148481] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.433 [2024-04-24 21:41:37.148605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.433 [2024-04-24 21:41:37.148623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.433 [2024-04-24 21:41:37.148633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.433 [2024-04-24 21:41:37.148642] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.433 [2024-04-24 21:41:37.148660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.433 qpair failed and we were unable to recover it. 00:26:14.433 [2024-04-24 21:41:37.158517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.433 [2024-04-24 21:41:37.158643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.433 [2024-04-24 21:41:37.158662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.434 [2024-04-24 21:41:37.158671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.434 [2024-04-24 21:41:37.158680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.434 [2024-04-24 21:41:37.158698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.434 qpair failed and we were unable to recover it. 00:26:14.434 [2024-04-24 21:41:37.168526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:14.434 [2024-04-24 21:41:37.168646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:14.434 [2024-04-24 21:41:37.168665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:14.434 [2024-04-24 21:41:37.168675] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:14.434 [2024-04-24 21:41:37.168683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b03730 00:26:14.434 [2024-04-24 21:41:37.168702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:14.434 qpair failed and we were unable to recover it. 
00:26:14.956 [2024-04-24 21:41:37.609799] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.956 [2024-04-24 21:41:37.609968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.956 [2024-04-24 21:41:37.610000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.956 [2024-04-24 21:41:37.610016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.956 [2024-04-24 21:41:37.610029] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b7c000b90
00:26:14.956 [2024-04-24 21:41:37.610057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.956 qpair failed and we were unable to recover it.
00:26:14.956 [2024-04-24 21:41:37.619788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.956 [2024-04-24 21:41:37.619940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.956 [2024-04-24 21:41:37.619959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.956 [2024-04-24 21:41:37.619969] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.956 [2024-04-24 21:41:37.619978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b7c000b90
00:26:14.956 [2024-04-24 21:41:37.619997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:26:14.956 qpair failed and we were unable to recover it.
00:26:14.956 [2024-04-24 21:41:37.629853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.956 [2024-04-24 21:41:37.630026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.956 [2024-04-24 21:41:37.630058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.956 [2024-04-24 21:41:37.630073] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.956 [2024-04-24 21:41:37.630091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:14.956 [2024-04-24 21:41:37.630120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:14.956 qpair failed and we were unable to recover it.
00:26:14.956 [2024-04-24 21:41:37.639840] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.956 [2024-04-24 21:41:37.639965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.956 [2024-04-24 21:41:37.639984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.956 [2024-04-24 21:41:37.639994] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.956 [2024-04-24 21:41:37.640003] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b8c000b90
00:26:14.956 [2024-04-24 21:41:37.640023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:14.956 qpair failed and we were unable to recover it.
00:26:14.956 [2024-04-24 21:41:37.640166] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:26:14.956 A controller has encountered a failure and is being reset.
00:26:14.956 [2024-04-24 21:41:37.649851] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.956 [2024-04-24 21:41:37.649980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.956 [2024-04-24 21:41:37.650002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.956 [2024-04-24 21:41:37.650014] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.956 [2024-04-24 21:41:37.650023] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b84000b90
00:26:14.956 [2024-04-24 21:41:37.650045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:14.956 qpair failed and we were unable to recover it.
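Note: the Keep Alive failure above is what finally trips host-side controller-level recovery (the reset logged next). For comparison only, a kernel initiator exercises the same timeout machinery through nvme-cli; this is a hedged sketch against the listener from this run, with illustrative timeout values that are not taken from the test:

    # --keep-alive-tmo, --reconnect-delay and --ctrl-loss-tmo are nvme-cli connect options
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --keep-alive-tmo=10 --reconnect-delay=2 --ctrl-loss-tmo=60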
00:26:14.956 [2024-04-24 21:41:37.659852] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:14.956 [2024-04-24 21:41:37.659976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:14.956 [2024-04-24 21:41:37.659995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:14.956 [2024-04-24 21:41:37.660005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:14.956 [2024-04-24 21:41:37.660014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1b84000b90
00:26:14.956 [2024-04-24 21:41:37.660035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:14.956 qpair failed and we were unable to recover it.
00:26:14.956 Controller properly reset.
00:26:14.956 Initializing NVMe Controllers
00:26:14.956 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:14.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:14.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:26:14.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:26:14.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:26:14.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:26:14.956 Initialization complete. Launching workers.
00:26:14.956 Starting thread on core 1
00:26:14.956 Starting thread on core 2
00:26:14.956 Starting thread on core 3
00:26:14.956 Starting thread on core 0
00:26:14.956 21:41:37 -- host/target_disconnect.sh@59 -- # sync
00:26:14.956
00:26:14.956 real	0m11.410s
00:26:14.956 user	0m20.328s
00:26:14.956 sys	0m4.809s
00:26:14.956 21:41:37 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:14.956 21:41:37 -- common/autotest_common.sh@10 -- # set +x
00:26:14.956 ************************************
00:26:14.956 END TEST nvmf_target_disconnect_tc2
00:26:14.956 ************************************
00:26:15.214 21:41:37 -- host/target_disconnect.sh@80 -- # '[' -n '' ']'
00:26:15.214 21:41:37 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:26:15.214 21:41:37 -- host/target_disconnect.sh@85 -- # nvmftestfini
00:26:15.214 21:41:37 -- nvmf/common.sh@477 -- # nvmfcleanup
00:26:15.214 21:41:37 -- nvmf/common.sh@117 -- # sync
00:26:15.214 21:41:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:15.214 21:41:37 -- nvmf/common.sh@120 -- # set +e
00:26:15.214 21:41:37 -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:15.214 21:41:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:15.214 rmmod nvme_tcp
00:26:15.214 rmmod nvme_fabrics
00:26:15.214 rmmod nvme_keyring
00:26:15.215 21:41:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:15.215 21:41:37 -- nvmf/common.sh@124 -- # set -e
00:26:15.215 21:41:37 -- nvmf/common.sh@125 -- # return 0
00:26:15.215 21:41:37 -- nvmf/common.sh@478 -- # '[' -n 3004144 ']'
00:26:15.215 21:41:37 -- nvmf/common.sh@479 -- # killprocess 3004144
00:26:15.215 21:41:37 -- common/autotest_common.sh@936 -- # '[' -z 3004144 ']'
00:26:15.215 21:41:37 -- common/autotest_common.sh@940 -- # kill -0 3004144
00:26:15.215 21:41:37 -- common/autotest_common.sh@941 -- # uname
00:26:15.215 21:41:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:15.215 21:41:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3004144
00:26:15.215 21:41:37 -- common/autotest_common.sh@942 -- # process_name=reactor_4
00:26:15.215 21:41:37 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']'
00:26:15.215 21:41:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3004144'
00:26:15.215 killing process with pid 3004144
00:26:15.215 21:41:37 -- common/autotest_common.sh@955 -- # kill 3004144
00:26:15.215 21:41:37 -- common/autotest_common.sh@960 -- # wait 3004144
00:26:15.473 21:41:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:26:15.473 21:41:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:26:15.473 21:41:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:26:15.473 21:41:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:15.473 21:41:38 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:15.473 21:41:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:15.473 21:41:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:15.473 21:41:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:17.374 21:41:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:17.632
00:26:17.632 real	0m21.444s
00:26:17.632 user	0m48.310s
00:26:17.632 sys	0m10.738s
00:26:17.632 21:41:40 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:17.632 21:41:40 -- common/autotest_common.sh@10 -- # set +x
00:26:17.632 ************************************
00:26:17.632 END TEST nvmf_target_disconnect
00:26:17.632 ************************************
00:26:17.632 21:41:40 -- nvmf/nvmf.sh@123 -- # timing_exit host
00:26:17.632 21:41:40 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:17.632 21:41:40 -- common/autotest_common.sh@10 -- # set +x
00:26:17.632 21:41:40 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT
00:26:17.632
00:26:17.632 real	19m35.449s
00:26:17.632 user	39m9.737s
00:26:17.632 sys	7m23.790s
00:26:17.632 21:41:40 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:17.632 21:41:40 -- common/autotest_common.sh@10 -- # set +x
00:26:17.632 ************************************
00:26:17.632 END TEST nvmf_tcp
00:26:17.632 ************************************
00:26:17.632 21:41:40 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]]
00:26:17.632 21:41:40 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:26:17.632 21:41:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:26:17.632 21:41:40 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:17.632 21:41:40 -- common/autotest_common.sh@10 -- # set +x
00:26:17.890 ************************************
00:26:17.890 START TEST spdkcli_nvmf_tcp
00:26:17.890 ************************************
00:26:17.890 21:41:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:26:17.890 * Looking for test storage...
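Note: the nvmftestfini teardown traced above reduces to two steps: stop the target process, then unload the host-side kernel modules (the rmmod lines are modprobe's verbose output). A minimal stand-alone sketch, assuming the target PID is in $nvmf_pid:

    sudo kill "$nvmf_pid"                                            # ask the SPDK target to exit
    while sudo kill -0 "$nvmf_pid" 2>/dev/null; do sleep 0.5; done   # wait until it is gone
    sudo modprobe -v -r nvme-tcp                                     # also pulls out nvme_fabrics/nvme_keyring deps
    sudo modprobe -v -r nvme-fabrics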
00:26:17.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:26:17.890 21:41:40 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:26:17.890 21:41:40 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:26:17.890 21:41:40 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:26:17.890 21:41:40 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:17.890 21:41:40 -- nvmf/common.sh@7 -- # uname -s
00:26:17.890 21:41:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:17.890 21:41:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:17.890 21:41:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:17.890 21:41:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:17.890 21:41:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:17.890 21:41:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:17.890 21:41:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:17.890 21:41:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:17.890 21:41:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:17.890 21:41:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:17.890 21:41:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:26:17.890 21:41:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:26:17.890 21:41:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:17.890 21:41:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:17.890 21:41:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:17.890 21:41:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:17.890 21:41:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:17.890 21:41:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:17.890 21:41:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:17.890 21:41:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:17.890 21:41:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:17.890 21:41:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:17.890 21:41:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:17.890 21:41:40 -- paths/export.sh@5 -- # export PATH
00:26:17.890 21:41:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:17.890 21:41:40 -- nvmf/common.sh@47 -- # : 0
00:26:17.890 21:41:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:26:17.890 21:41:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:26:17.890 21:41:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:17.890 21:41:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:17.890 21:41:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:17.890 21:41:40 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:26:17.890 21:41:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:26:17.890 21:41:40 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:26:17.890 21:41:40 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:26:17.890 21:41:40 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:26:17.890 21:41:40 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:26:17.890 21:41:40 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:26:17.890 21:41:40 -- common/autotest_common.sh@710 -- # xtrace_disable
00:26:17.890 21:41:40 -- common/autotest_common.sh@10 -- # set +x
00:26:17.890 21:41:40 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:26:17.890 21:41:40 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3005757
00:26:17.890 21:41:40 -- spdkcli/common.sh@34 -- # waitforlisten 3005757
00:26:17.890 21:41:40 -- common/autotest_common.sh@817 -- # '[' -z 3005757 ']'
00:26:17.890 21:41:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:17.890 21:41:40 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:26:17.890 21:41:40 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:17.890 21:41:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:17.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:17.890 21:41:40 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:17.890 21:41:40 -- common/autotest_common.sh@10 -- # set +x
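Note: run_nvmf_tgt above launches the target on cores 0-1 (-m 0x3 is a cpumask; -p 0 picks the main core) and waitforlisten then polls the RPC socket until the app answers. A minimal hedged sketch of the same sequence, assuming an SPDK build tree in the current directory:

    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    tgt_pid=$!
    # poll until the target answers on its default RPC socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done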
00:26:17.890 [2024-04-24 21:41:40.761995] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3005757 ] 00:26:18.148 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.148 [2024-04-24 21:41:40.834017] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:18.148 [2024-04-24 21:41:40.909569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.148 [2024-04-24 21:41:40.909572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.713 21:41:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:18.713 21:41:41 -- common/autotest_common.sh@850 -- # return 0 00:26:18.713 21:41:41 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:18.713 21:41:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:18.713 21:41:41 -- common/autotest_common.sh@10 -- # set +x 00:26:18.713 21:41:41 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:18.713 21:41:41 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:18.713 21:41:41 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:18.713 21:41:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:18.713 21:41:41 -- common/autotest_common.sh@10 -- # set +x 00:26:18.713 21:41:41 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:18.713 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:18.713 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:18.713 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:18.713 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:18.713 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:18.713 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:18.713 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:18.713 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:18.713 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' 
'\''127.0.0.1:4261'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:18.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:18.713 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:18.713 ' 00:26:19.278 [2024-04-24 21:41:41.927054] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:21.181 [2024-04-24 21:41:43.966681] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.554 [2024-04-24 21:41:45.142631] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:24.451 [2024-04-24 21:41:47.305114] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:26.354 [2024-04-24 21:41:49.166926] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:27.723 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:27.723 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:27.723 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:27.723 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:27.723 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:27.723 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:27.724 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:27.724 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:27.724 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:27.724 
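[Annotation] The long quoted blob above is a batch of spdkcli commands fed through spdkcli_job.py, which runs each one and checks the expected output; the "Executing command" lines that follow are its echo of that batch. The same configuration can be driven one command at a time with scripts/spdkcli.py against the default /var/tmp/spdk.sock, as a sketch (commands lifted verbatim from the job string; $SPDK_DIR assumed as above):

    "$SPDK_DIR/scripts/spdkcli.py" /bdevs/malloc create 32 512 Malloc1
    "$SPDK_DIR/scripts/spdkcli.py" nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    "$SPDK_DIR/scripts/spdkcli.py" /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    "$SPDK_DIR/scripts/spdkcli.py" /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    "$SPDK_DIR/scripts/spdkcli.py" ll /nvmf   # dump the resulting tree, as check_match does below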
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:27.724 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:27.724 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:27.724 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:27.981 21:41:50 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:27.981 21:41:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:27.981 21:41:50 -- common/autotest_common.sh@10 -- # set +x 00:26:27.981 21:41:50 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:27.981 21:41:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:27.981 21:41:50 -- common/autotest_common.sh@10 -- # set +x 00:26:27.981 21:41:50 -- spdkcli/nvmf.sh@69 -- # check_match 00:26:27.981 21:41:50 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:28.238 21:41:51 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:28.495 21:41:51 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:28.495 21:41:51 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:28.495 21:41:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:28.495 21:41:51 -- common/autotest_common.sh@10 -- # set +x 00:26:28.495 21:41:51 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:28.495 21:41:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:28.495 21:41:51 -- common/autotest_common.sh@10 
-- # set +x 00:26:28.495 21:41:51 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:28.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:28.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:28.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:28.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:28.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:28.495 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:28.495 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:28.495 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:28.495 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:28.495 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:28.495 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:28.495 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:28.495 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:28.495 ' 00:26:33.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:33.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:33.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:33.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:33.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:33.758 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:33.758 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:33.758 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:33.758 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:33.758 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:33.758 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:33.758 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:33.758 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:33.758 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:33.758 21:41:56 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:33.758 21:41:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:33.758 21:41:56 -- common/autotest_common.sh@10 -- # set +x 00:26:33.758 21:41:56 -- spdkcli/nvmf.sh@90 -- # killprocess 3005757 00:26:33.759 21:41:56 -- common/autotest_common.sh@936 -- # '[' -z 3005757 ']' 00:26:33.759 21:41:56 -- common/autotest_common.sh@940 -- # kill -0 3005757 00:26:33.759 21:41:56 -- common/autotest_common.sh@941 -- # uname 00:26:33.759 21:41:56 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:33.759 21:41:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3005757 00:26:33.759 21:41:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:33.759 21:41:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:33.759 21:41:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3005757' 00:26:33.759 killing process with pid 3005757 00:26:33.759 21:41:56 -- common/autotest_common.sh@955 -- # kill 3005757 00:26:33.759 [2024-04-24 21:41:56.230481] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:33.759 21:41:56 -- common/autotest_common.sh@960 -- # wait 3005757 00:26:33.759 21:41:56 -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:33.759 21:41:56 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:33.759 21:41:56 -- spdkcli/common.sh@13 -- # '[' -n 3005757 ']' 00:26:33.759 21:41:56 -- spdkcli/common.sh@14 -- # killprocess 3005757 00:26:33.759 21:41:56 -- common/autotest_common.sh@936 -- # '[' -z 3005757 ']' 00:26:33.759 21:41:56 -- common/autotest_common.sh@940 -- # kill -0 3005757 00:26:33.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3005757) - No such process 00:26:33.759 21:41:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3005757 is not found' 00:26:33.759 Process with pid 3005757 is not found 00:26:33.759 21:41:56 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:33.759 21:41:56 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:33.759 21:41:56 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:33.759 00:26:33.759 real 0m15.869s 00:26:33.759 user 0m32.655s 00:26:33.759 sys 0m0.832s 00:26:33.759 21:41:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:33.759 21:41:56 -- common/autotest_common.sh@10 -- # set +x 00:26:33.759 ************************************ 00:26:33.759 END TEST spdkcli_nvmf_tcp 00:26:33.759 ************************************ 00:26:33.759 21:41:56 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:33.759 21:41:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:33.759 21:41:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:33.759 21:41:56 -- common/autotest_common.sh@10 -- # set +x 00:26:33.759 ************************************ 00:26:34.017 START TEST nvmf_identify_passthru 00:26:34.017 ************************************ 00:26:34.017 21:41:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:34.017 * Looking for test storage... 
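[Annotation] Before the identify_passthru trace continues, note the teardown idiom the spdkcli test above just used: killprocess probes the pid, kills it, and tolerates the process already being gone (the "No such process" line above is the expected second-kill path). A simplified outline, with internals assumed beyond what the trace shows:

    # Simplified killprocess sketch; $pid is the nvmf_tgt pid from waitforlisten.
    if kill -0 "$pid" 2> /dev/null; then       # still alive?
        # the real helper also checks 'ps --no-headers -o comm=' and refuses to kill sudo
        kill "$pid"                             # SIGTERM; reactors drain and exit
        wait "$pid" 2> /dev/null || true        # reap; an already-reaped pid is not an error
    fi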
00:26:34.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:34.017 21:41:56 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:34.017 21:41:56 -- nvmf/common.sh@7 -- # uname -s 00:26:34.017 21:41:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.017 21:41:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.017 21:41:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.017 21:41:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.017 21:41:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.017 21:41:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.017 21:41:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.017 21:41:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.017 21:41:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.017 21:41:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.017 21:41:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:34.017 21:41:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:34.017 21:41:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.017 21:41:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.017 21:41:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:34.017 21:41:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.017 21:41:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:34.017 21:41:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.017 21:41:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.017 21:41:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.017 21:41:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.017 21:41:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.017 21:41:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.017 21:41:56 -- paths/export.sh@5 -- # export PATH 00:26:34.017 21:41:56 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.017 21:41:56 -- nvmf/common.sh@47 -- # : 0 00:26:34.017 21:41:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:34.017 21:41:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:34.017 21:41:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:34.017 21:41:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.017 21:41:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.017 21:41:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:34.017 21:41:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:34.017 21:41:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:34.017 21:41:56 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:34.017 21:41:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.017 21:41:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.017 21:41:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.017 21:41:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.017 21:41:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.018 21:41:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.018 21:41:56 -- paths/export.sh@5 -- # export PATH 00:26:34.018 21:41:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.018 21:41:56 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:26:34.018 21:41:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:34.018 21:41:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.018 21:41:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:34.018 21:41:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:34.018 21:41:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:34.018 21:41:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.018 21:41:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:34.018 21:41:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.018 21:41:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:34.018 21:41:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:34.018 21:41:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:34.018 21:41:56 -- common/autotest_common.sh@10 -- # set +x 00:26:40.578 21:42:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:40.578 21:42:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:40.578 21:42:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:40.578 21:42:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:40.578 21:42:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:40.578 21:42:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:40.578 21:42:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:40.578 21:42:03 -- nvmf/common.sh@295 -- # net_devs=() 00:26:40.578 21:42:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:40.578 21:42:03 -- nvmf/common.sh@296 -- # e810=() 00:26:40.578 21:42:03 -- nvmf/common.sh@296 -- # local -ga e810 00:26:40.578 21:42:03 -- nvmf/common.sh@297 -- # x722=() 00:26:40.578 21:42:03 -- nvmf/common.sh@297 -- # local -ga x722 00:26:40.578 21:42:03 -- nvmf/common.sh@298 -- # mlx=() 00:26:40.578 21:42:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:40.578 21:42:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.578 21:42:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.578 21:42:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.579 21:42:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.579 21:42:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.579 21:42:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.579 21:42:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.579 21:42:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.579 21:42:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.579 21:42:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.579 21:42:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.579 21:42:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:40.579 21:42:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:40.579 21:42:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:40.579 21:42:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.579 21:42:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:40.579 Found 0000:af:00.0 (0x8086 - 
0x159b) 00:26:40.579 21:42:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.579 21:42:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:40.579 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:40.579 21:42:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:40.579 21:42:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.579 21:42:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.579 21:42:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:40.579 21:42:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.579 21:42:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:40.579 Found net devices under 0000:af:00.0: cvl_0_0 00:26:40.579 21:42:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.579 21:42:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.579 21:42:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.579 21:42:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:40.579 21:42:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.579 21:42:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:40.579 Found net devices under 0000:af:00.1: cvl_0_1 00:26:40.579 21:42:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.579 21:42:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:40.579 21:42:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:40.579 21:42:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:40.579 21:42:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.579 21:42:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.579 21:42:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.579 21:42:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:40.579 21:42:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.579 21:42:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.579 21:42:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:40.579 21:42:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.579 21:42:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.579 21:42:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:40.579 21:42:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:40.579 21:42:03 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:26:40.579 21:42:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.579 21:42:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.579 21:42:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.579 21:42:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:40.579 21:42:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.579 21:42:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.579 21:42:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.579 21:42:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:40.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:26:40.579 00:26:40.579 --- 10.0.0.2 ping statistics --- 00:26:40.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.579 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:26:40.579 21:42:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:40.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:26:40.579 00:26:40.579 --- 10.0.0.1 ping statistics --- 00:26:40.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.579 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:26:40.579 21:42:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.579 21:42:03 -- nvmf/common.sh@411 -- # return 0 00:26:40.579 21:42:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:40.579 21:42:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.579 21:42:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:40.579 21:42:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.579 21:42:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:40.579 21:42:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:40.579 21:42:03 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:40.579 21:42:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:40.579 21:42:03 -- common/autotest_common.sh@10 -- # set +x 00:26:40.579 21:42:03 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:40.579 21:42:03 -- common/autotest_common.sh@1510 -- # bdfs=() 00:26:40.579 21:42:03 -- common/autotest_common.sh@1510 -- # local bdfs 00:26:40.579 21:42:03 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:26:40.579 21:42:03 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:26:40.579 21:42:03 -- common/autotest_common.sh@1499 -- # bdfs=() 00:26:40.579 21:42:03 -- common/autotest_common.sh@1499 -- # local bdfs 00:26:40.579 21:42:03 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:40.579 21:42:03 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:40.579 21:42:03 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:26:40.838 21:42:03 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:26:40.838 21:42:03 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:d8:00.0 00:26:40.838 21:42:03 -- common/autotest_common.sh@1513 -- # echo 0000:d8:00.0 00:26:40.838 21:42:03 -- 
target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:26:40.838 21:42:03 -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:26:40.838 21:42:03 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:26:40.838 21:42:03 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:40.838 21:42:03 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:40.838 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.137 21:42:08 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:26:46.137 21:42:08 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:26:46.137 21:42:08 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:46.137 21:42:08 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:46.137 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.327 21:42:13 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:26:50.327 21:42:13 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:50.327 21:42:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:50.327 21:42:13 -- common/autotest_common.sh@10 -- # set +x 00:26:50.327 21:42:13 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:50.327 21:42:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:50.327 21:42:13 -- common/autotest_common.sh@10 -- # set +x 00:26:50.327 21:42:13 -- target/identify_passthru.sh@31 -- # nvmfpid=3013852 00:26:50.327 21:42:13 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:50.327 21:42:13 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:50.327 21:42:13 -- target/identify_passthru.sh@35 -- # waitforlisten 3013852 00:26:50.327 21:42:13 -- common/autotest_common.sh@817 -- # '[' -z 3013852 ']' 00:26:50.327 21:42:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.327 21:42:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:50.327 21:42:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.327 21:42:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:50.327 21:42:13 -- common/autotest_common.sh@10 -- # set +x 00:26:50.327 [2024-04-24 21:42:13.173635] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:26:50.327 [2024-04-24 21:42:13.173688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.327 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.602 [2024-04-24 21:42:13.247715] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.602 [2024-04-24 21:42:13.315852] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.602 [2024-04-24 21:42:13.315896] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
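[Annotation] The target above is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc because the passthru identify hook can only be configured before subsystem initialization; that ordering is what the RPCs issued on the next lines enforce. A sketch of the sequence ($SPDK_DIR and $bdf are stand-ins; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py on the default socket):

    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    "$SPDK_DIR/scripts/rpc.py" nvmf_set_config --passthru-identify-ctrlr   # only legal pre-init
    "$SPDK_DIR/scripts/rpc.py" framework_start_init                        # subsystems come up now
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
    "$SPDK_DIR/scripts/rpc.py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$bdf"

With the controller attached, the test exposes it as namespace 1 of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, as the subsequent trace shows.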
00:26:50.602 [2024-04-24 21:42:13.315905] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.602 [2024-04-24 21:42:13.315913] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.602 [2024-04-24 21:42:13.315936] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:50.602 [2024-04-24 21:42:13.315989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.602 [2024-04-24 21:42:13.316081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.602 [2024-04-24 21:42:13.316169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.602 [2024-04-24 21:42:13.316171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.167 21:42:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:51.167 21:42:13 -- common/autotest_common.sh@850 -- # return 0 00:26:51.167 21:42:13 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:51.167 21:42:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.167 21:42:13 -- common/autotest_common.sh@10 -- # set +x 00:26:51.167 INFO: Log level set to 20 00:26:51.167 INFO: Requests: 00:26:51.167 { 00:26:51.167 "jsonrpc": "2.0", 00:26:51.167 "method": "nvmf_set_config", 00:26:51.167 "id": 1, 00:26:51.167 "params": { 00:26:51.167 "admin_cmd_passthru": { 00:26:51.167 "identify_ctrlr": true 00:26:51.167 } 00:26:51.167 } 00:26:51.167 } 00:26:51.167 00:26:51.167 INFO: response: 00:26:51.167 { 00:26:51.167 "jsonrpc": "2.0", 00:26:51.167 "id": 1, 00:26:51.167 "result": true 00:26:51.167 } 00:26:51.167 00:26:51.167 21:42:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.167 21:42:13 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:51.167 21:42:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.167 21:42:13 -- common/autotest_common.sh@10 -- # set +x 00:26:51.167 INFO: Setting log level to 20 00:26:51.167 INFO: Setting log level to 20 00:26:51.167 INFO: Log level set to 20 00:26:51.167 INFO: Log level set to 20 00:26:51.167 INFO: Requests: 00:26:51.167 { 00:26:51.167 "jsonrpc": "2.0", 00:26:51.167 "method": "framework_start_init", 00:26:51.167 "id": 1 00:26:51.167 } 00:26:51.167 00:26:51.167 INFO: Requests: 00:26:51.167 { 00:26:51.167 "jsonrpc": "2.0", 00:26:51.167 "method": "framework_start_init", 00:26:51.167 "id": 1 00:26:51.167 } 00:26:51.167 00:26:51.424 [2024-04-24 21:42:14.065383] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:51.424 INFO: response: 00:26:51.424 { 00:26:51.424 "jsonrpc": "2.0", 00:26:51.424 "id": 1, 00:26:51.424 "result": true 00:26:51.424 } 00:26:51.424 00:26:51.424 INFO: response: 00:26:51.424 { 00:26:51.424 "jsonrpc": "2.0", 00:26:51.424 "id": 1, 00:26:51.424 "result": true 00:26:51.424 } 00:26:51.424 00:26:51.424 21:42:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.424 21:42:14 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:51.424 21:42:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.424 21:42:14 -- common/autotest_common.sh@10 -- # set +x 00:26:51.424 INFO: Setting log level to 40 00:26:51.424 INFO: Setting log level to 40 00:26:51.424 INFO: Setting log level to 40 00:26:51.424 [2024-04-24 21:42:14.078793] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.424 21:42:14 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.424 21:42:14 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:51.424 21:42:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:51.424 21:42:14 -- common/autotest_common.sh@10 -- # set +x 00:26:51.424 21:42:14 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:26:51.424 21:42:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.424 21:42:14 -- common/autotest_common.sh@10 -- # set +x 00:26:54.708 Nvme0n1 00:26:54.708 21:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.709 21:42:16 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:54.709 21:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.709 21:42:16 -- common/autotest_common.sh@10 -- # set +x 00:26:54.709 21:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.709 21:42:16 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:54.709 21:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.709 21:42:16 -- common/autotest_common.sh@10 -- # set +x 00:26:54.709 21:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.709 21:42:16 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:54.709 21:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.709 21:42:16 -- common/autotest_common.sh@10 -- # set +x 00:26:54.709 [2024-04-24 21:42:16.997785] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.709 21:42:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.709 21:42:17 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:54.709 21:42:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.709 21:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:54.709 [2024-04-24 21:42:17.005555] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:54.709 [ 00:26:54.709 { 00:26:54.709 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:54.709 "subtype": "Discovery", 00:26:54.709 "listen_addresses": [], 00:26:54.709 "allow_any_host": true, 00:26:54.709 "hosts": [] 00:26:54.709 }, 00:26:54.709 { 00:26:54.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.709 "subtype": "NVMe", 00:26:54.709 "listen_addresses": [ 00:26:54.709 { 00:26:54.709 "transport": "TCP", 00:26:54.709 "trtype": "TCP", 00:26:54.709 "adrfam": "IPv4", 00:26:54.709 "traddr": "10.0.0.2", 00:26:54.709 "trsvcid": "4420" 00:26:54.709 } 00:26:54.709 ], 00:26:54.709 "allow_any_host": true, 00:26:54.709 "hosts": [], 00:26:54.709 "serial_number": "SPDK00000000000001", 00:26:54.709 "model_number": "SPDK bdev Controller", 00:26:54.709 "max_namespaces": 1, 00:26:54.709 "min_cntlid": 1, 00:26:54.709 "max_cntlid": 65519, 00:26:54.709 "namespaces": [ 00:26:54.709 { 00:26:54.709 "nsid": 1, 00:26:54.709 "bdev_name": "Nvme0n1", 00:26:54.709 "name": "Nvme0n1", 00:26:54.709 "nguid": "4DFED13D25BD4A01AC0073D55163D3A2", 00:26:54.709 "uuid": "4dfed13d-25bd-4a01-ac00-73d55163d3a2" 00:26:54.709 } 00:26:54.709 ] 00:26:54.709 } 00:26:54.709 ] 00:26:54.709 21:42:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.709 21:42:17 -- 
target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:54.709 21:42:17 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:54.709 21:42:17 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:54.709 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.709 21:42:17 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:26:54.709 21:42:17 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:54.709 21:42:17 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:54.709 21:42:17 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:54.709 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.709 21:42:17 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:26:54.709 21:42:17 -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:26:54.709 21:42:17 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:26:54.709 21:42:17 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:54.709 21:42:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.709 21:42:17 -- common/autotest_common.sh@10 -- # set +x 00:26:54.709 21:42:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.709 21:42:17 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:54.709 21:42:17 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:54.709 21:42:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:54.709 21:42:17 -- nvmf/common.sh@117 -- # sync 00:26:54.709 21:42:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:54.709 21:42:17 -- nvmf/common.sh@120 -- # set +e 00:26:54.709 21:42:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:54.709 21:42:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:54.709 rmmod nvme_tcp 00:26:54.709 rmmod nvme_fabrics 00:26:54.709 rmmod nvme_keyring 00:26:54.709 21:42:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:54.709 21:42:17 -- nvmf/common.sh@124 -- # set -e 00:26:54.709 21:42:17 -- nvmf/common.sh@125 -- # return 0 00:26:54.709 21:42:17 -- nvmf/common.sh@478 -- # '[' -n 3013852 ']' 00:26:54.709 21:42:17 -- nvmf/common.sh@479 -- # killprocess 3013852 00:26:54.709 21:42:17 -- common/autotest_common.sh@936 -- # '[' -z 3013852 ']' 00:26:54.709 21:42:17 -- common/autotest_common.sh@940 -- # kill -0 3013852 00:26:54.709 21:42:17 -- common/autotest_common.sh@941 -- # uname 00:26:54.709 21:42:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:54.709 21:42:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3013852 00:26:54.709 21:42:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:54.709 21:42:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:54.709 21:42:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3013852' 00:26:54.709 killing process with pid 3013852 00:26:54.709 21:42:17 -- common/autotest_common.sh@955 -- # kill 3013852 00:26:54.709 [2024-04-24 21:42:17.447376] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 
hit 1 times 00:26:54.709 21:42:17 -- common/autotest_common.sh@960 -- # wait 3013852 00:26:56.610 21:42:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:56.610 21:42:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:56.610 21:42:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:56.610 21:42:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:56.610 21:42:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:56.610 21:42:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.610 21:42:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:56.610 21:42:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.142 21:42:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:59.142 00:26:59.142 real 0m24.863s 00:26:59.142 user 0m32.870s 00:26:59.142 sys 0m6.407s 00:26:59.142 21:42:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:59.142 21:42:21 -- common/autotest_common.sh@10 -- # set +x 00:26:59.142 ************************************ 00:26:59.142 END TEST nvmf_identify_passthru 00:26:59.142 ************************************ 00:26:59.142 21:42:21 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:59.142 21:42:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:59.142 21:42:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:59.142 21:42:21 -- common/autotest_common.sh@10 -- # set +x 00:26:59.142 ************************************ 00:26:59.142 START TEST nvmf_dif 00:26:59.142 ************************************ 00:26:59.142 21:42:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:59.142 * Looking for test storage... 
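[Annotation] The identify_passthru test that just ended boils down to one check, condensed here from the trace: read the serial (and model) over NVMe/TCP and require them to match what local PCIe identify reported. A sketch, with $SPDK_DIR and $nvme_serial_number standing in for the traced values:

    nvmf_serial=$("$SPDK_DIR/build/bin/spdk_nvme_identify" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        | grep 'Serial Number:' | awk '{print $3}')
    [ "$nvmf_serial" = "$nvme_serial_number" ] || exit 1   # a mismatch fails the test

In this run both sides reported BTLN916500W71P6AGN / INTEL, so the passthru identify path is working.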
00:26:59.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:59.142 21:42:21 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.142 21:42:21 -- nvmf/common.sh@7 -- # uname -s 00:26:59.142 21:42:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.142 21:42:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.142 21:42:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.142 21:42:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.142 21:42:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.142 21:42:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.142 21:42:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.142 21:42:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.142 21:42:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.142 21:42:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.142 21:42:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:59.142 21:42:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:59.142 21:42:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.142 21:42:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.142 21:42:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.142 21:42:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.142 21:42:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.142 21:42:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.142 21:42:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.142 21:42:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.142 21:42:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.142 21:42:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.142 21:42:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.142 21:42:21 -- paths/export.sh@5 -- # export PATH 00:26:59.142 21:42:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.142 21:42:21 -- nvmf/common.sh@47 -- # : 0 00:26:59.142 21:42:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:59.142 21:42:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:59.142 21:42:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.142 21:42:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.142 21:42:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.142 21:42:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:59.142 21:42:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:59.142 21:42:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:59.142 21:42:21 -- target/dif.sh@15 -- # NULL_META=16 00:26:59.142 21:42:21 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:59.142 21:42:21 -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:59.142 21:42:21 -- target/dif.sh@15 -- # NULL_DIF=1 00:26:59.142 21:42:21 -- target/dif.sh@135 -- # nvmftestinit 00:26:59.142 21:42:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:59.142 21:42:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.142 21:42:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:59.142 21:42:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:59.142 21:42:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:59.142 21:42:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.142 21:42:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:59.142 21:42:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.142 21:42:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:59.142 21:42:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:59.142 21:42:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:59.142 21:42:21 -- common/autotest_common.sh@10 -- # set +x 00:27:05.771 21:42:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:05.771 21:42:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:05.771 21:42:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:05.771 21:42:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:05.771 21:42:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:05.771 21:42:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:05.771 21:42:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:05.771 21:42:28 -- nvmf/common.sh@295 -- # net_devs=() 00:27:05.771 21:42:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:05.771 21:42:28 -- nvmf/common.sh@296 -- # e810=() 00:27:05.771 21:42:28 -- nvmf/common.sh@296 -- # local -ga e810 00:27:05.771 21:42:28 -- nvmf/common.sh@297 -- # x722=() 00:27:05.771 21:42:28 -- nvmf/common.sh@297 -- # local -ga x722 00:27:05.771 21:42:28 -- nvmf/common.sh@298 -- # mlx=() 00:27:05.772 21:42:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:05.772 21:42:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.772 21:42:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.772 21:42:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.772 21:42:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
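[Annotation] nvmftestinit is about to rebuild the same two-port loopback topology the previous test used: the first E810 port moves into a network namespace as the target side, the second stays in the default namespace as the initiator. Collected from the commands repeated later in this trace (not a new procedure):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check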
00:27:05.772 21:42:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.772 21:42:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.772 21:42:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.772 21:42:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.772 21:42:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.772 21:42:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.772 21:42:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.772 21:42:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:05.772 21:42:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:05.772 21:42:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:05.772 21:42:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.772 21:42:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:05.772 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:05.772 21:42:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.772 21:42:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:05.772 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:05.772 21:42:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:05.772 21:42:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.772 21:42:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.772 21:42:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:05.772 21:42:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.772 21:42:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:05.772 Found net devices under 0000:af:00.0: cvl_0_0 00:27:05.772 21:42:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.772 21:42:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.772 21:42:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.772 21:42:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:05.772 21:42:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.772 21:42:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:05.772 Found net devices under 0000:af:00.1: cvl_0_1 00:27:05.772 21:42:28 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:05.772 21:42:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:05.772 21:42:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:05.772 21:42:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:05.772 21:42:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:05.772 21:42:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.772 21:42:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.772 21:42:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.772 21:42:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:05.772 21:42:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.772 21:42:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.772 21:42:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:05.772 21:42:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.772 21:42:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.772 21:42:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:05.772 21:42:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:05.772 21:42:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.772 21:42:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.772 21:42:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.772 21:42:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.772 21:42:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:05.772 21:42:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.030 21:42:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.030 21:42:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.030 21:42:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:06.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:27:06.030 00:27:06.030 --- 10.0.0.2 ping statistics --- 00:27:06.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.030 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:27:06.030 21:42:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:06.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:27:06.030 00:27:06.030 --- 10.0.0.1 ping statistics --- 00:27:06.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.030 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:27:06.030 21:42:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.030 21:42:28 -- nvmf/common.sh@411 -- # return 0 00:27:06.030 21:42:28 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:27:06.030 21:42:28 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:09.315 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:09.315 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:09.315 21:42:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.315 21:42:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:09.315 21:42:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:09.315 21:42:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.315 21:42:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:09.315 21:42:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:09.315 21:42:32 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:09.315 21:42:32 -- target/dif.sh@137 -- # nvmfappstart 00:27:09.315 21:42:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:09.315 21:42:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:09.315 21:42:32 -- common/autotest_common.sh@10 -- # set +x 00:27:09.315 21:42:32 -- nvmf/common.sh@470 -- # nvmfpid=3019887 00:27:09.315 21:42:32 -- nvmf/common.sh@471 -- # waitforlisten 3019887 00:27:09.315 21:42:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:09.315 21:42:32 -- common/autotest_common.sh@817 -- # '[' -z 3019887 ']' 00:27:09.315 21:42:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.315 21:42:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:09.315 21:42:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
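
The two pings above close the loop on the topology nvmf_tcp_init just built: one port of the e810 pair stays in the root namespace as the initiator, the other is moved into a private namespace and serves as the target. Consolidated from the traced commands (a sketch of what the trace shows, not an excerpt of nvmf/common.sh; cvl_0_0 and cvl_0_1 are the netdevs discovered earlier):

    ip -4 addr flush cvl_0_0                            # drop any stale addresses
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Everything after this point follows from that split: nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk (visible below), and the initiator side reaches it at 10.0.0.2:4420 over the back-to-back NIC pair.
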
00:27:09.315 21:42:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:09.315 21:42:32 -- common/autotest_common.sh@10 -- # set +x 00:27:09.315 [2024-04-24 21:42:32.157013] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:27:09.315 [2024-04-24 21:42:32.157064] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.315 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.574 [2024-04-24 21:42:32.231850] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.574 [2024-04-24 21:42:32.304621] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.574 [2024-04-24 21:42:32.304655] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.574 [2024-04-24 21:42:32.304665] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.574 [2024-04-24 21:42:32.304673] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.574 [2024-04-24 21:42:32.304697] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.574 [2024-04-24 21:42:32.304720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.140 21:42:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:10.140 21:42:32 -- common/autotest_common.sh@850 -- # return 0 00:27:10.140 21:42:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:10.140 21:42:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:10.140 21:42:32 -- common/autotest_common.sh@10 -- # set +x 00:27:10.140 21:42:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.140 21:42:32 -- target/dif.sh@139 -- # create_transport 00:27:10.140 21:42:32 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:10.140 21:42:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.140 21:42:32 -- common/autotest_common.sh@10 -- # set +x 00:27:10.140 [2024-04-24 21:42:32.991219] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.140 21:42:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.140 21:42:32 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:10.140 21:42:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:10.140 21:42:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:10.140 21:42:32 -- common/autotest_common.sh@10 -- # set +x 00:27:10.399 ************************************ 00:27:10.399 START TEST fio_dif_1_default 00:27:10.399 ************************************ 00:27:10.399 21:42:33 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:27:10.399 21:42:33 -- target/dif.sh@86 -- # create_subsystems 0 00:27:10.399 21:42:33 -- target/dif.sh@28 -- # local sub 00:27:10.399 21:42:33 -- target/dif.sh@30 -- # for sub in "$@" 00:27:10.399 21:42:33 -- target/dif.sh@31 -- # create_subsystem 0 00:27:10.399 21:42:33 -- target/dif.sh@18 -- # local sub_id=0 00:27:10.399 21:42:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:10.399 21:42:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.399 21:42:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.399 
bdev_null0 00:27:10.399 21:42:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.399 21:42:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:10.399 21:42:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.399 21:42:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.399 21:42:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.399 21:42:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:10.399 21:42:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.399 21:42:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.399 21:42:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.399 21:42:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:10.399 21:42:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.399 21:42:33 -- common/autotest_common.sh@10 -- # set +x 00:27:10.399 [2024-04-24 21:42:33.203919] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.399 21:42:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.399 21:42:33 -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:10.399 21:42:33 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:10.399 21:42:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:10.399 21:42:33 -- nvmf/common.sh@521 -- # config=() 00:27:10.399 21:42:33 -- nvmf/common.sh@521 -- # local subsystem config 00:27:10.399 21:42:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:10.399 21:42:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:10.399 21:42:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:10.399 { 00:27:10.399 "params": { 00:27:10.399 "name": "Nvme$subsystem", 00:27:10.399 "trtype": "$TEST_TRANSPORT", 00:27:10.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.399 "adrfam": "ipv4", 00:27:10.399 "trsvcid": "$NVMF_PORT", 00:27:10.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.399 "hdgst": ${hdgst:-false}, 00:27:10.399 "ddgst": ${ddgst:-false} 00:27:10.399 }, 00:27:10.399 "method": "bdev_nvme_attach_controller" 00:27:10.399 } 00:27:10.399 EOF 00:27:10.399 )") 00:27:10.399 21:42:33 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:10.399 21:42:33 -- target/dif.sh@82 -- # gen_fio_conf 00:27:10.399 21:42:33 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:10.399 21:42:33 -- target/dif.sh@54 -- # local file 00:27:10.399 21:42:33 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:10.399 21:42:33 -- target/dif.sh@56 -- # cat 00:27:10.399 21:42:33 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:10.399 21:42:33 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:10.399 21:42:33 -- common/autotest_common.sh@1327 -- # shift 00:27:10.399 21:42:33 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:10.399 21:42:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:10.399 21:42:33 -- nvmf/common.sh@543 -- # cat 00:27:10.399 21:42:33 -- common/autotest_common.sh@1331 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:10.399 21:42:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:10.399 21:42:33 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:10.399 21:42:33 -- target/dif.sh@72 -- # (( file <= files )) 00:27:10.399 21:42:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:10.399 21:42:33 -- nvmf/common.sh@545 -- # jq . 00:27:10.399 21:42:33 -- nvmf/common.sh@546 -- # IFS=, 00:27:10.399 21:42:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:10.399 "params": { 00:27:10.399 "name": "Nvme0", 00:27:10.399 "trtype": "tcp", 00:27:10.399 "traddr": "10.0.0.2", 00:27:10.399 "adrfam": "ipv4", 00:27:10.399 "trsvcid": "4420", 00:27:10.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:10.399 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:10.399 "hdgst": false, 00:27:10.399 "ddgst": false 00:27:10.399 }, 00:27:10.399 "method": "bdev_nvme_attach_controller" 00:27:10.399 }' 00:27:10.399 21:42:33 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:10.399 21:42:33 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:10.399 21:42:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:10.399 21:42:33 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:10.399 21:42:33 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:10.399 21:42:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:10.673 21:42:33 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:10.673 21:42:33 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:10.673 21:42:33 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:10.673 21:42:33 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:10.933 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:10.933 fio-3.35 00:27:10.933 Starting 1 thread 00:27:10.933 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.139 00:27:23.139 filename0: (groupid=0, jobs=1): err= 0: pid=3020486: Wed Apr 24 21:42:44 2024 00:27:23.139 read: IOPS=95, BW=380KiB/s (389kB/s)(3808KiB/10013msec) 00:27:23.139 slat (nsec): min=5625, max=36404, avg=5911.14, stdev=1408.56 00:27:23.139 clat (usec): min=41822, max=44947, avg=42054.73, stdev=304.58 00:27:23.139 lat (usec): min=41828, max=44971, avg=42060.64, stdev=304.93 00:27:23.139 clat percentiles (usec): 00:27:23.139 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:27:23.139 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:27:23.139 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:27:23.139 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:27:23.139 | 99.99th=[44827] 00:27:23.139 bw ( KiB/s): min= 352, max= 384, per=99.66%, avg=379.20, stdev=11.72, samples=20 00:27:23.139 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:27:23.139 lat (msec) : 50=100.00% 00:27:23.139 cpu : usr=85.57%, sys=14.16%, ctx=21, majf=0, minf=171 00:27:23.139 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:23.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:23.139 issued rwts: total=952,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:27:23.139 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:23.139 00:27:23.139 Run status group 0 (all jobs): 00:27:23.140 READ: bw=380KiB/s (389kB/s), 380KiB/s-380KiB/s (389kB/s-389kB/s), io=3808KiB (3899kB), run=10013-10013msec 00:27:23.140 21:42:44 -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:23.140 21:42:44 -- target/dif.sh@43 -- # local sub 00:27:23.140 21:42:44 -- target/dif.sh@45 -- # for sub in "$@" 00:27:23.140 21:42:44 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:23.140 21:42:44 -- target/dif.sh@36 -- # local sub_id=0 00:27:23.140 21:42:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:23.140 21:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.140 21:42:44 -- common/autotest_common.sh@10 -- # set +x 00:27:23.140 21:42:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.140 21:42:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:23.140 21:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.140 21:42:44 -- common/autotest_common.sh@10 -- # set +x 00:27:23.140 21:42:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.140 00:27:23.140 real 0m11.273s 00:27:23.140 user 0m17.474s 00:27:23.140 sys 0m1.830s 00:27:23.140 21:42:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:23.140 21:42:44 -- common/autotest_common.sh@10 -- # set +x 00:27:23.140 ************************************ 00:27:23.140 END TEST fio_dif_1_default 00:27:23.140 ************************************ 00:27:23.140 21:42:44 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:23.140 21:42:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:23.140 21:42:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:23.140 21:42:44 -- common/autotest_common.sh@10 -- # set +x 00:27:23.140 ************************************ 00:27:23.140 START TEST fio_dif_1_multi_subsystems 00:27:23.140 ************************************ 00:27:23.140 21:42:44 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:27:23.140 21:42:44 -- target/dif.sh@92 -- # local files=1 00:27:23.140 21:42:44 -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:23.140 21:42:44 -- target/dif.sh@28 -- # local sub 00:27:23.140 21:42:44 -- target/dif.sh@30 -- # for sub in "$@" 00:27:23.140 21:42:44 -- target/dif.sh@31 -- # create_subsystem 0 00:27:23.140 21:42:44 -- target/dif.sh@18 -- # local sub_id=0 00:27:23.140 21:42:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:23.140 21:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.140 21:42:44 -- common/autotest_common.sh@10 -- # set +x 00:27:23.140 bdev_null0 00:27:23.140 21:42:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.140 21:42:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:23.140 21:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.140 21:42:44 -- common/autotest_common.sh@10 -- # set +x 00:27:23.140 21:42:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.140 21:42:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:23.140 21:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.140 21:42:44 -- common/autotest_common.sh@10 -- # set +x 00:27:23.140 21:42:44 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:27:23.140 21:42:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:23.140 21:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.140 21:42:44 -- common/autotest_common.sh@10 -- # set +x 00:27:23.140 [2024-04-24 21:42:44.679876] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.140 21:42:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.140 21:42:44 -- target/dif.sh@30 -- # for sub in "$@" 00:27:23.140 21:42:44 -- target/dif.sh@31 -- # create_subsystem 1 00:27:23.140 21:42:44 -- target/dif.sh@18 -- # local sub_id=1 00:27:23.140 21:42:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:23.140 21:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.140 21:42:44 -- common/autotest_common.sh@10 -- # set +x 00:27:23.140 bdev_null1 00:27:23.140 21:42:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.140 21:42:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:23.140 21:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.140 21:42:44 -- common/autotest_common.sh@10 -- # set +x 00:27:23.140 21:42:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.140 21:42:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:23.140 21:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.140 21:42:44 -- common/autotest_common.sh@10 -- # set +x 00:27:23.140 21:42:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.140 21:42:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:23.140 21:42:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.140 21:42:44 -- common/autotest_common.sh@10 -- # set +x 00:27:23.140 21:42:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.140 21:42:44 -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:23.140 21:42:44 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:23.140 21:42:44 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:23.140 21:42:44 -- nvmf/common.sh@521 -- # config=() 00:27:23.140 21:42:44 -- nvmf/common.sh@521 -- # local subsystem config 00:27:23.140 21:42:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:23.140 21:42:44 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:23.140 21:42:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:23.140 { 00:27:23.140 "params": { 00:27:23.140 "name": "Nvme$subsystem", 00:27:23.140 "trtype": "$TEST_TRANSPORT", 00:27:23.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.140 "adrfam": "ipv4", 00:27:23.140 "trsvcid": "$NVMF_PORT", 00:27:23.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.140 "hdgst": ${hdgst:-false}, 00:27:23.140 "ddgst": ${ddgst:-false} 00:27:23.140 }, 00:27:23.140 "method": "bdev_nvme_attach_controller" 00:27:23.140 } 00:27:23.140 EOF 00:27:23.140 )") 00:27:23.140 21:42:44 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:23.140 21:42:44 -- target/dif.sh@82 -- # gen_fio_conf 00:27:23.140 21:42:44 -- 
common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:23.140 21:42:44 -- target/dif.sh@54 -- # local file 00:27:23.140 21:42:44 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:23.140 21:42:44 -- target/dif.sh@56 -- # cat 00:27:23.140 21:42:44 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:23.140 21:42:44 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:23.140 21:42:44 -- common/autotest_common.sh@1327 -- # shift 00:27:23.140 21:42:44 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:23.140 21:42:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:23.140 21:42:44 -- nvmf/common.sh@543 -- # cat 00:27:23.140 21:42:44 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:23.140 21:42:44 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:23.140 21:42:44 -- target/dif.sh@72 -- # (( file <= files )) 00:27:23.140 21:42:44 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:23.140 21:42:44 -- target/dif.sh@73 -- # cat 00:27:23.140 21:42:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:23.140 21:42:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:23.140 21:42:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:23.140 { 00:27:23.140 "params": { 00:27:23.140 "name": "Nvme$subsystem", 00:27:23.140 "trtype": "$TEST_TRANSPORT", 00:27:23.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.140 "adrfam": "ipv4", 00:27:23.140 "trsvcid": "$NVMF_PORT", 00:27:23.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.140 "hdgst": ${hdgst:-false}, 00:27:23.140 "ddgst": ${ddgst:-false} 00:27:23.140 }, 00:27:23.140 "method": "bdev_nvme_attach_controller" 00:27:23.140 } 00:27:23.140 EOF 00:27:23.140 )") 00:27:23.141 21:42:44 -- target/dif.sh@72 -- # (( file++ )) 00:27:23.141 21:42:44 -- nvmf/common.sh@543 -- # cat 00:27:23.141 21:42:44 -- target/dif.sh@72 -- # (( file <= files )) 00:27:23.141 21:42:44 -- nvmf/common.sh@545 -- # jq . 
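
The config+= heredocs traced above accumulate one attach-controller fragment per subsystem; gen_nvmf_target_json then comma-joins the fragments and pretty-prints the result, which is the JSON document shown immediately below. A minimal standalone sketch of the same pattern (illustrative only; the real function in nvmf/common.sh builds each fragment with a heredoc and substitutes $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT — printf is used here just to keep the sketch copy-paste safe):

    fmt='{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2",'
    fmt+=' "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s",'
    fmt+=' "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false },'
    fmt+=' "method": "bdev_nvme_attach_controller" }'
    config=()
    for sub in 0 1; do
        config+=("$(printf "$fmt" "$sub" "$sub" "$sub")")   # one fragment per subsystem
    done
    # IFS=, makes "${config[*]}" join with commas; jq validates and pretty-prints.
    ( IFS=,; printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }' \
        "${config[*]}" ) | jq .
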
00:27:23.141 21:42:44 -- nvmf/common.sh@546 -- # IFS=, 00:27:23.141 21:42:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:23.141 "params": { 00:27:23.141 "name": "Nvme0", 00:27:23.141 "trtype": "tcp", 00:27:23.141 "traddr": "10.0.0.2", 00:27:23.141 "adrfam": "ipv4", 00:27:23.141 "trsvcid": "4420", 00:27:23.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:23.141 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:23.141 "hdgst": false, 00:27:23.141 "ddgst": false 00:27:23.141 }, 00:27:23.141 "method": "bdev_nvme_attach_controller" 00:27:23.141 },{ 00:27:23.141 "params": { 00:27:23.141 "name": "Nvme1", 00:27:23.141 "trtype": "tcp", 00:27:23.141 "traddr": "10.0.0.2", 00:27:23.141 "adrfam": "ipv4", 00:27:23.141 "trsvcid": "4420", 00:27:23.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:23.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:23.141 "hdgst": false, 00:27:23.141 "ddgst": false 00:27:23.141 }, 00:27:23.141 "method": "bdev_nvme_attach_controller" 00:27:23.141 }' 00:27:23.141 21:42:44 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:23.141 21:42:44 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:23.141 21:42:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:23.141 21:42:44 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:23.141 21:42:44 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:23.141 21:42:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:23.141 21:42:44 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:23.141 21:42:44 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:23.141 21:42:44 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:23.141 21:42:44 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:23.141 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:23.141 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:23.141 fio-3.35 00:27:23.141 Starting 2 threads 00:27:23.141 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.338 00:27:35.338 filename0: (groupid=0, jobs=1): err= 0: pid=3022589: Wed Apr 24 21:42:56 2024 00:27:35.338 read: IOPS=181, BW=726KiB/s (743kB/s)(7280KiB/10033msec) 00:27:35.338 slat (nsec): min=3788, max=22364, avg=6644.13, stdev=1981.47 00:27:35.338 clat (usec): min=701, max=46831, avg=22031.24, stdev=20376.28 00:27:35.338 lat (usec): min=707, max=46843, avg=22037.88, stdev=20375.69 00:27:35.338 clat percentiles (usec): 00:27:35.338 | 1.00th=[ 1516], 5.00th=[ 1582], 10.00th=[ 1598], 20.00th=[ 1598], 00:27:35.338 | 30.00th=[ 1614], 40.00th=[ 1631], 50.00th=[41157], 60.00th=[42206], 00:27:35.338 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:35.338 | 99.00th=[42730], 99.50th=[43254], 99.90th=[46924], 99.95th=[46924], 00:27:35.338 | 99.99th=[46924] 00:27:35.338 bw ( KiB/s): min= 704, max= 768, per=65.69%, avg=726.40, stdev=29.55, samples=20 00:27:35.338 iops : min= 176, max= 192, avg=181.60, stdev= 7.39, samples=20 00:27:35.338 lat (usec) : 750=0.22%, 1000=0.22% 00:27:35.338 lat (msec) : 2=49.29%, 4=0.16%, 50=50.11% 00:27:35.338 cpu : usr=93.73%, sys=6.02%, ctx=13, majf=0, minf=61 00:27:35.338 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:27:35.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.338 issued rwts: total=1820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:35.338 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:35.338 filename1: (groupid=0, jobs=1): err= 0: pid=3022590: Wed Apr 24 21:42:56 2024 00:27:35.338 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10006msec) 00:27:35.338 slat (nsec): min=5587, max=37023, avg=7222.84, stdev=2606.30 00:27:35.338 clat (usec): min=41809, max=43546, avg=42018.79, stdev=202.28 00:27:35.338 lat (usec): min=41815, max=43570, avg=42026.01, stdev=202.82 00:27:35.338 clat percentiles (usec): 00:27:35.338 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:27:35.338 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:27:35.338 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:35.338 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:27:35.338 | 99.99th=[43779] 00:27:35.338 bw ( KiB/s): min= 352, max= 384, per=34.38%, avg=380.63, stdev=10.09, samples=19 00:27:35.338 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:27:35.338 lat (msec) : 50=100.00% 00:27:35.338 cpu : usr=93.32%, sys=6.43%, ctx=17, majf=0, minf=152 00:27:35.338 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:35.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.338 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:35.338 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:35.338 00:27:35.338 Run status group 0 (all jobs): 00:27:35.338 READ: bw=1105KiB/s (1132kB/s), 381KiB/s-726KiB/s (390kB/s-743kB/s), io=10.8MiB (11.4MB), run=10006-10033msec 00:27:35.338 21:42:56 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:35.338 21:42:56 -- target/dif.sh@43 -- # local sub 00:27:35.338 21:42:56 -- target/dif.sh@45 -- # for sub in "$@" 00:27:35.338 21:42:56 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:35.338 21:42:56 -- target/dif.sh@36 -- # local sub_id=0 00:27:35.338 21:42:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:35.338 21:42:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.338 21:42:56 -- common/autotest_common.sh@10 -- # set +x 00:27:35.338 21:42:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:35.338 21:42:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:35.338 21:42:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.338 21:42:56 -- common/autotest_common.sh@10 -- # set +x 00:27:35.338 21:42:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:35.338 21:42:56 -- target/dif.sh@45 -- # for sub in "$@" 00:27:35.338 21:42:56 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:35.338 21:42:56 -- target/dif.sh@36 -- # local sub_id=1 00:27:35.338 21:42:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:35.338 21:42:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.338 21:42:56 -- common/autotest_common.sh@10 -- # set +x 00:27:35.338 21:42:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:35.338 21:42:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:35.338 21:42:56 -- common/autotest_common.sh@549 -- # xtrace_disable 
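
The run summarized above is stock fio driving SPDK bdevs directly: fio_bdev preloads the bdev fio plugin built in the SPDK tree and hands it two inherited descriptors, the attach-controller JSON on fd 62 and the generated job file on fd 61. Stripped of the sanitizer probing (the ldd | grep libasan / libclang_rt.asan lines, which only decide whether an ASAN runtime must be preloaded first), the invocation traced above reduces to:

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

With --ioengine=spdk_bdev each fio "filename" names a bdev rather than a file, so the I/O path under test is fio -> bdev layer -> bdev_nvme initiator -> TCP -> nvmf_tgt inside the namespace.
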
00:27:35.338 21:42:56 -- common/autotest_common.sh@10 -- # set +x 00:27:35.338 21:42:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:35.338 00:27:35.338 real 0m11.570s 00:27:35.338 user 0m27.900s 00:27:35.338 sys 0m1.620s 00:27:35.338 21:42:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:35.338 21:42:56 -- common/autotest_common.sh@10 -- # set +x 00:27:35.338 ************************************ 00:27:35.338 END TEST fio_dif_1_multi_subsystems 00:27:35.338 ************************************ 00:27:35.338 21:42:56 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:35.338 21:42:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:35.338 21:42:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:35.338 21:42:56 -- common/autotest_common.sh@10 -- # set +x 00:27:35.338 ************************************ 00:27:35.338 START TEST fio_dif_rand_params 00:27:35.338 ************************************ 00:27:35.338 21:42:56 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:27:35.338 21:42:56 -- target/dif.sh@100 -- # local NULL_DIF 00:27:35.338 21:42:56 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:35.338 21:42:56 -- target/dif.sh@103 -- # NULL_DIF=3 00:27:35.338 21:42:56 -- target/dif.sh@103 -- # bs=128k 00:27:35.338 21:42:56 -- target/dif.sh@103 -- # numjobs=3 00:27:35.338 21:42:56 -- target/dif.sh@103 -- # iodepth=3 00:27:35.338 21:42:56 -- target/dif.sh@103 -- # runtime=5 00:27:35.338 21:42:56 -- target/dif.sh@105 -- # create_subsystems 0 00:27:35.338 21:42:56 -- target/dif.sh@28 -- # local sub 00:27:35.338 21:42:56 -- target/dif.sh@30 -- # for sub in "$@" 00:27:35.338 21:42:56 -- target/dif.sh@31 -- # create_subsystem 0 00:27:35.338 21:42:56 -- target/dif.sh@18 -- # local sub_id=0 00:27:35.338 21:42:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:35.338 21:42:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.338 21:42:56 -- common/autotest_common.sh@10 -- # set +x 00:27:35.338 bdev_null0 00:27:35.338 21:42:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:35.338 21:42:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:35.338 21:42:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.338 21:42:56 -- common/autotest_common.sh@10 -- # set +x 00:27:35.338 21:42:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:35.339 21:42:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:35.339 21:42:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.339 21:42:56 -- common/autotest_common.sh@10 -- # set +x 00:27:35.339 21:42:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:35.339 21:42:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:35.339 21:42:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.339 21:42:56 -- common/autotest_common.sh@10 -- # set +x 00:27:35.339 [2024-04-24 21:42:56.441485] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.339 21:42:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:35.339 21:42:56 -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:35.339 21:42:56 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:35.339 21:42:56 -- target/dif.sh@51 -- # gen_nvmf_target_json 
0 00:27:35.339 21:42:56 -- nvmf/common.sh@521 -- # config=() 00:27:35.339 21:42:56 -- nvmf/common.sh@521 -- # local subsystem config 00:27:35.339 21:42:56 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:35.339 21:42:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:35.339 21:42:56 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:35.339 21:42:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:35.339 { 00:27:35.339 "params": { 00:27:35.339 "name": "Nvme$subsystem", 00:27:35.339 "trtype": "$TEST_TRANSPORT", 00:27:35.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.339 "adrfam": "ipv4", 00:27:35.339 "trsvcid": "$NVMF_PORT", 00:27:35.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.339 "hdgst": ${hdgst:-false}, 00:27:35.339 "ddgst": ${ddgst:-false} 00:27:35.339 }, 00:27:35.339 "method": "bdev_nvme_attach_controller" 00:27:35.339 } 00:27:35.339 EOF 00:27:35.339 )") 00:27:35.339 21:42:56 -- target/dif.sh@82 -- # gen_fio_conf 00:27:35.339 21:42:56 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:35.339 21:42:56 -- target/dif.sh@54 -- # local file 00:27:35.339 21:42:56 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:35.339 21:42:56 -- target/dif.sh@56 -- # cat 00:27:35.339 21:42:56 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:35.339 21:42:56 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:35.339 21:42:56 -- common/autotest_common.sh@1327 -- # shift 00:27:35.339 21:42:56 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:35.339 21:42:56 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:35.339 21:42:56 -- nvmf/common.sh@543 -- # cat 00:27:35.339 21:42:56 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:35.339 21:42:56 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:35.339 21:42:56 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:35.339 21:42:56 -- target/dif.sh@72 -- # (( file <= files )) 00:27:35.339 21:42:56 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:35.339 21:42:56 -- nvmf/common.sh@545 -- # jq . 
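
Each "params" object in the JSON printed just below becomes one bdev_nvme_attach_controller call inside the fio plugin. The same attachment can be reproduced by hand against a running target; a sketch using SPDK's scripts/rpc.py (short-option spellings as in rpc.py — verify against your tree):

    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0

The resulting namespace bdev (Nvme0n1 under SPDK's naming convention) is what the job file points fio at.
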
00:27:35.339 21:42:56 -- nvmf/common.sh@546 -- # IFS=, 00:27:35.339 21:42:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:35.339 "params": { 00:27:35.339 "name": "Nvme0", 00:27:35.339 "trtype": "tcp", 00:27:35.339 "traddr": "10.0.0.2", 00:27:35.339 "adrfam": "ipv4", 00:27:35.339 "trsvcid": "4420", 00:27:35.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:35.339 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:35.339 "hdgst": false, 00:27:35.339 "ddgst": false 00:27:35.339 }, 00:27:35.339 "method": "bdev_nvme_attach_controller" 00:27:35.339 }' 00:27:35.339 21:42:56 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:35.339 21:42:56 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:35.339 21:42:56 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:35.339 21:42:56 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:35.339 21:42:56 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:35.339 21:42:56 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:35.339 21:42:56 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:35.339 21:42:56 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:35.339 21:42:56 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:35.339 21:42:56 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:35.339 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:35.339 ... 00:27:35.339 fio-3.35 00:27:35.339 Starting 3 threads 00:27:35.339 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.522 00:27:39.522 filename0: (groupid=0, jobs=1): err= 0: pid=3024607: Wed Apr 24 21:43:02 2024 00:27:39.522 read: IOPS=191, BW=24.0MiB/s (25.1MB/s)(121MiB/5045msec) 00:27:39.522 slat (usec): min=5, max=120, avg= 8.99, stdev= 4.42 00:27:39.522 clat (usec): min=4592, max=92999, avg=15580.35, stdev=15738.44 00:27:39.522 lat (usec): min=4599, max=93009, avg=15589.34, stdev=15738.94 00:27:39.522 clat percentiles (usec): 00:27:39.522 | 1.00th=[ 4948], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 6980], 00:27:39.522 | 30.00th=[ 7767], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10159], 00:27:39.522 | 70.00th=[11863], 80.00th=[14222], 90.00th=[51119], 95.00th=[53740], 00:27:39.522 | 99.00th=[56886], 99.50th=[57934], 99.90th=[92799], 99.95th=[92799], 00:27:39.522 | 99.99th=[92799] 00:27:39.522 bw ( KiB/s): min=16128, max=34560, per=27.76%, avg=24704.00, stdev=6144.30, samples=10 00:27:39.522 iops : min= 126, max= 270, avg=193.00, stdev=48.00, samples=10 00:27:39.522 lat (msec) : 10=58.37%, 20=26.96%, 50=2.69%, 100=11.98% 00:27:39.522 cpu : usr=91.67%, sys=7.87%, ctx=7, majf=0, minf=160 00:27:39.522 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:39.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.522 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:39.522 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:39.522 filename0: (groupid=0, jobs=1): err= 0: pid=3024608: Wed Apr 24 21:43:02 2024 00:27:39.522 read: IOPS=198, BW=24.9MiB/s (26.1MB/s)(125MiB/5007msec) 00:27:39.522 slat (nsec): min=5759, max=23974, avg=8298.17, stdev=2605.77 00:27:39.522 clat (usec): 
min=4625, max=57216, avg=15065.97, stdev=15318.40 00:27:39.522 lat (usec): min=4633, max=57227, avg=15074.27, stdev=15318.50 00:27:39.522 clat percentiles (usec): 00:27:39.522 | 1.00th=[ 5080], 5.00th=[ 5538], 10.00th=[ 6128], 20.00th=[ 6849], 00:27:39.522 | 30.00th=[ 7439], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[10028], 00:27:39.522 | 70.00th=[11469], 80.00th=[13698], 90.00th=[50594], 95.00th=[52691], 00:27:39.522 | 99.00th=[56361], 99.50th=[56886], 99.90th=[57410], 99.95th=[57410], 00:27:39.522 | 99.99th=[57410] 00:27:39.522 bw ( KiB/s): min=19968, max=33024, per=28.57%, avg=25420.80, stdev=5176.65, samples=10 00:27:39.522 iops : min= 156, max= 258, avg=198.60, stdev=40.44, samples=10 00:27:39.522 lat (msec) : 10=59.64%, 20=26.20%, 50=3.71%, 100=10.44% 00:27:39.522 cpu : usr=92.75%, sys=6.83%, ctx=7, majf=0, minf=84 00:27:39.522 IO depths : 1=4.7%, 2=95.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:39.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.522 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:39.522 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:39.522 filename0: (groupid=0, jobs=1): err= 0: pid=3024609: Wed Apr 24 21:43:02 2024 00:27:39.522 read: IOPS=308, BW=38.5MiB/s (40.4MB/s)(193MiB/5009msec) 00:27:39.522 slat (nsec): min=5762, max=25454, avg=8296.66, stdev=2378.16 00:27:39.522 clat (usec): min=4527, max=58149, avg=9721.28, stdev=10401.57 00:27:39.522 lat (usec): min=4534, max=58160, avg=9729.58, stdev=10401.86 00:27:39.522 clat percentiles (usec): 00:27:39.522 | 1.00th=[ 4817], 5.00th=[ 5014], 10.00th=[ 5342], 20.00th=[ 5669], 00:27:39.522 | 30.00th=[ 5997], 40.00th=[ 6390], 50.00th=[ 6783], 60.00th=[ 7308], 00:27:39.522 | 70.00th=[ 8029], 80.00th=[ 8848], 90.00th=[11600], 95.00th=[47973], 00:27:39.522 | 99.00th=[53740], 99.50th=[55313], 99.90th=[57934], 99.95th=[57934], 00:27:39.522 | 99.99th=[57934] 00:27:39.522 bw ( KiB/s): min=22272, max=52992, per=44.34%, avg=39449.60, stdev=9972.29, samples=10 00:27:39.522 iops : min= 174, max= 414, avg=308.20, stdev=77.91, samples=10 00:27:39.522 lat (msec) : 10=86.65%, 20=7.52%, 50=2.85%, 100=2.98% 00:27:39.522 cpu : usr=90.93%, sys=8.61%, ctx=8, majf=0, minf=28 00:27:39.522 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:39.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.522 issued rwts: total=1543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:39.522 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:39.522 00:27:39.522 Run status group 0 (all jobs): 00:27:39.522 READ: bw=86.9MiB/s (91.1MB/s), 24.0MiB/s-38.5MiB/s (25.1MB/s-40.4MB/s), io=438MiB (460MB), run=5007-5045msec 00:27:39.781 21:43:02 -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:39.781 21:43:02 -- target/dif.sh@43 -- # local sub 00:27:39.781 21:43:02 -- target/dif.sh@45 -- # for sub in "$@" 00:27:39.781 21:43:02 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:39.781 21:43:02 -- target/dif.sh@36 -- # local sub_id=0 00:27:39.781 21:43:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:39.781 21:43:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.781 21:43:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.781 21:43:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
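
For reference, the data-path setup behind these numbers is only four RPCs, issued above through the rpc_cmd wrapper (which forwards to scripts/rpc.py against the /var/tmp/spdk.sock the target listens on). Replayed directly, the sequence for this round would look like (sketch):

    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

bdev_null_create's positional arguments are name, size in MB and block size, so this is a 64 MB null bdev with 512-byte blocks carrying 16 bytes of metadata each, formatted for the DIF type 3 protection this round exercises.
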
00:27:39.781 21:43:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:39.781 21:43:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.781 21:43:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.781 21:43:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.781 21:43:02 -- target/dif.sh@109 -- # NULL_DIF=2 00:27:39.781 21:43:02 -- target/dif.sh@109 -- # bs=4k 00:27:39.781 21:43:02 -- target/dif.sh@109 -- # numjobs=8 00:27:39.781 21:43:02 -- target/dif.sh@109 -- # iodepth=16 00:27:39.781 21:43:02 -- target/dif.sh@109 -- # runtime= 00:27:39.781 21:43:02 -- target/dif.sh@109 -- # files=2 00:27:39.781 21:43:02 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:39.781 21:43:02 -- target/dif.sh@28 -- # local sub 00:27:39.781 21:43:02 -- target/dif.sh@30 -- # for sub in "$@" 00:27:39.781 21:43:02 -- target/dif.sh@31 -- # create_subsystem 0 00:27:39.781 21:43:02 -- target/dif.sh@18 -- # local sub_id=0 00:27:39.781 21:43:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:39.781 21:43:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.781 21:43:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.781 bdev_null0 00:27:39.781 21:43:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.781 21:43:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:39.781 21:43:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.781 21:43:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.781 21:43:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.781 21:43:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:39.781 21:43:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.781 21:43:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.781 21:43:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.781 21:43:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:39.781 21:43:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.781 21:43:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.781 [2024-04-24 21:43:02.578709] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.781 21:43:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.781 21:43:02 -- target/dif.sh@30 -- # for sub in "$@" 00:27:39.781 21:43:02 -- target/dif.sh@31 -- # create_subsystem 1 00:27:39.781 21:43:02 -- target/dif.sh@18 -- # local sub_id=1 00:27:39.781 21:43:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:39.781 21:43:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.781 21:43:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.781 bdev_null1 00:27:39.781 21:43:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.781 21:43:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:39.781 21:43:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.781 21:43:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.781 21:43:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.781 21:43:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:39.781 21:43:02 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.781 21:43:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.781 21:43:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.781 21:43:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.781 21:43:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.781 21:43:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.781 21:43:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.781 21:43:02 -- target/dif.sh@30 -- # for sub in "$@" 00:27:39.781 21:43:02 -- target/dif.sh@31 -- # create_subsystem 2 00:27:39.781 21:43:02 -- target/dif.sh@18 -- # local sub_id=2 00:27:39.781 21:43:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:39.781 21:43:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.781 21:43:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.781 bdev_null2 00:27:39.781 21:43:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.781 21:43:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:39.781 21:43:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.781 21:43:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.781 21:43:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.781 21:43:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:39.781 21:43:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.781 21:43:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.781 21:43:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.781 21:43:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:39.781 21:43:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:39.781 21:43:02 -- common/autotest_common.sh@10 -- # set +x 00:27:39.781 21:43:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:39.781 21:43:02 -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:39.781 21:43:02 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:39.781 21:43:02 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:39.781 21:43:02 -- nvmf/common.sh@521 -- # config=() 00:27:39.781 21:43:02 -- nvmf/common.sh@521 -- # local subsystem config 00:27:39.781 21:43:02 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:39.781 21:43:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:39.781 21:43:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:39.781 { 00:27:39.781 "params": { 00:27:39.781 "name": "Nvme$subsystem", 00:27:39.781 "trtype": "$TEST_TRANSPORT", 00:27:39.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.781 "adrfam": "ipv4", 00:27:39.781 "trsvcid": "$NVMF_PORT", 00:27:39.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.781 "hdgst": ${hdgst:-false}, 00:27:39.781 "ddgst": ${ddgst:-false} 00:27:39.781 }, 00:27:39.781 "method": "bdev_nvme_attach_controller" 00:27:39.781 } 00:27:39.781 EOF 00:27:39.781 )") 00:27:39.781 21:43:02 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:39.781 21:43:02 -- target/dif.sh@82 -- # gen_fio_conf 
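
The three subsystems just created pair --dif-type 2 null bdevs with the transport option set back at test start (nvmf_create_transport -t tcp -o --dif-insert-or-strip). With dif-insert-or-strip the TCP target inserts protection information when writing into the DIF-formatted bdev and strips it before returning read data, so the host keeps seeing plain 512-byte logical blocks while every block in the backing bdev carries its 16-byte PI. A do-it-yourself sketch of the same combination, flags copied from the trace:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    for i in 0 1 2; do
        ./scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
    done
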
00:27:39.781 21:43:02 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:39.781 21:43:02 -- target/dif.sh@54 -- # local file 00:27:39.781 21:43:02 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:39.781 21:43:02 -- target/dif.sh@56 -- # cat 00:27:39.781 21:43:02 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:39.781 21:43:02 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:39.781 21:43:02 -- common/autotest_common.sh@1327 -- # shift 00:27:39.781 21:43:02 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:39.781 21:43:02 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:39.781 21:43:02 -- nvmf/common.sh@543 -- # cat 00:27:39.781 21:43:02 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:39.781 21:43:02 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:39.782 21:43:02 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:39.782 21:43:02 -- target/dif.sh@72 -- # (( file <= files )) 00:27:39.782 21:43:02 -- target/dif.sh@73 -- # cat 00:27:39.782 21:43:02 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:39.782 21:43:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:40.040 21:43:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:40.040 { 00:27:40.040 "params": { 00:27:40.040 "name": "Nvme$subsystem", 00:27:40.040 "trtype": "$TEST_TRANSPORT", 00:27:40.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.040 "adrfam": "ipv4", 00:27:40.040 "trsvcid": "$NVMF_PORT", 00:27:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.040 "hdgst": ${hdgst:-false}, 00:27:40.040 "ddgst": ${ddgst:-false} 00:27:40.040 }, 00:27:40.040 "method": "bdev_nvme_attach_controller" 00:27:40.040 } 00:27:40.040 EOF 00:27:40.040 )") 00:27:40.040 21:43:02 -- target/dif.sh@72 -- # (( file++ )) 00:27:40.040 21:43:02 -- nvmf/common.sh@543 -- # cat 00:27:40.040 21:43:02 -- target/dif.sh@72 -- # (( file <= files )) 00:27:40.040 21:43:02 -- target/dif.sh@73 -- # cat 00:27:40.040 21:43:02 -- target/dif.sh@72 -- # (( file++ )) 00:27:40.040 21:43:02 -- target/dif.sh@72 -- # (( file <= files )) 00:27:40.040 21:43:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:40.040 21:43:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:40.040 { 00:27:40.040 "params": { 00:27:40.040 "name": "Nvme$subsystem", 00:27:40.040 "trtype": "$TEST_TRANSPORT", 00:27:40.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.040 "adrfam": "ipv4", 00:27:40.040 "trsvcid": "$NVMF_PORT", 00:27:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.040 "hdgst": ${hdgst:-false}, 00:27:40.040 "ddgst": ${ddgst:-false} 00:27:40.040 }, 00:27:40.040 "method": "bdev_nvme_attach_controller" 00:27:40.040 } 00:27:40.040 EOF 00:27:40.040 )") 00:27:40.040 21:43:02 -- nvmf/common.sh@543 -- # cat 00:27:40.040 21:43:02 -- nvmf/common.sh@545 -- # jq . 
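
gen_fio_conf, whose file counters are traced here, writes the job file that lands on /dev/fd/61: a [global] section built from the bs/numjobs/iodepth variables of the current round plus one [filenameN] section per file. A plausible reconstruction for this NULL_DIF=2, bs=4k, numjobs=8, iodepth=16 round (a sketch — the Nvme*n1 bdev names are inferred from SPDK's controller naming and are not printed verbatim in the trace):

    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=4k
    numjobs=8
    iodepth=16
    [filename0]
    filename=Nvme0n1
    [filename1]
    filename=Nvme1n1
    [filename2]
    filename=Nvme2n1

This is consistent with the banner fio prints below: three filenameN groups at iodepth=16 and "Starting 24 threads" (3 sections x 8 jobs).
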
00:27:40.040 21:43:02 -- nvmf/common.sh@546 -- # IFS=, 00:27:40.040 21:43:02 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:40.040 "params": { 00:27:40.040 "name": "Nvme0", 00:27:40.040 "trtype": "tcp", 00:27:40.040 "traddr": "10.0.0.2", 00:27:40.040 "adrfam": "ipv4", 00:27:40.040 "trsvcid": "4420", 00:27:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:40.040 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:40.040 "hdgst": false, 00:27:40.040 "ddgst": false 00:27:40.040 }, 00:27:40.040 "method": "bdev_nvme_attach_controller" 00:27:40.040 },{ 00:27:40.040 "params": { 00:27:40.040 "name": "Nvme1", 00:27:40.040 "trtype": "tcp", 00:27:40.040 "traddr": "10.0.0.2", 00:27:40.040 "adrfam": "ipv4", 00:27:40.040 "trsvcid": "4420", 00:27:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:40.040 "hdgst": false, 00:27:40.040 "ddgst": false 00:27:40.040 }, 00:27:40.040 "method": "bdev_nvme_attach_controller" 00:27:40.040 },{ 00:27:40.040 "params": { 00:27:40.040 "name": "Nvme2", 00:27:40.040 "trtype": "tcp", 00:27:40.040 "traddr": "10.0.0.2", 00:27:40.040 "adrfam": "ipv4", 00:27:40.040 "trsvcid": "4420", 00:27:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:40.040 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:40.040 "hdgst": false, 00:27:40.040 "ddgst": false 00:27:40.040 }, 00:27:40.040 "method": "bdev_nvme_attach_controller" 00:27:40.040 }' 00:27:40.040 21:43:02 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:40.040 21:43:02 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:40.040 21:43:02 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:40.040 21:43:02 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:40.040 21:43:02 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:40.040 21:43:02 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:40.040 21:43:02 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:40.040 21:43:02 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:40.040 21:43:02 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:40.040 21:43:02 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:40.298 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:40.298 ... 00:27:40.298 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:40.298 ... 00:27:40.298 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:40.298 ... 
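Before launching fio, the wrapper traced above probes the SPDK fio plugin with ldd for sanitizer runtimes (libasan, libclang_rt.asan); any hit must be LD_PRELOADed ahead of the plugin, or the sanitizer aborts at startup. Both probes came back empty here, so LD_PRELOAD carries only the plugin itself. A condensed sketch of that launch logic (the plugin path is this workspace's):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_libs=
for sanitizer in libasan libclang_rt.asan; do
    # Column 3 of ldd output is the resolved shared-object path, if linked.
    lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$lib" ]] && asan_libs+=" $lib"
done
# Sanitizer runtimes (if any) must come before the plugin in LD_PRELOAD.
LD_PRELOAD="$asan_libs $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61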
00:27:40.298 fio-3.35 00:27:40.298 Starting 24 threads 00:27:40.298 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.531 00:27:52.531 filename0: (groupid=0, jobs=1): err= 0: pid=3025817: Wed Apr 24 21:43:14 2024 00:27:52.531 read: IOPS=584, BW=2338KiB/s (2394kB/s)(22.9MiB/10018msec) 00:27:52.531 slat (nsec): min=6222, max=64321, avg=12692.16, stdev=7679.57 00:27:52.531 clat (usec): min=4030, max=50674, avg=27290.95, stdev=5976.19 00:27:52.531 lat (usec): min=4044, max=50710, avg=27303.65, stdev=5977.24 00:27:52.531 clat percentiles (usec): 00:27:52.531 | 1.00th=[11338], 5.00th=[17171], 10.00th=[22676], 20.00th=[24249], 00:27:52.531 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:27:52.531 | 70.00th=[30802], 80.00th=[32375], 90.00th=[34341], 95.00th=[35914], 00:27:52.531 | 99.00th=[44827], 99.50th=[49021], 99.90th=[50594], 99.95th=[50594], 00:27:52.531 | 99.99th=[50594] 00:27:52.531 bw ( KiB/s): min= 2048, max= 2784, per=4.06%, avg=2339.25, stdev=172.78, samples=20 00:27:52.531 iops : min= 512, max= 696, avg=584.70, stdev=43.19, samples=20 00:27:52.531 lat (msec) : 10=0.55%, 20=8.15%, 50=91.15%, 100=0.15% 00:27:52.531 cpu : usr=97.25%, sys=2.31%, ctx=19, majf=0, minf=40 00:27:52.531 IO depths : 1=0.7%, 2=1.5%, 4=7.1%, 8=77.8%, 16=12.8%, 32=0.0%, >=64=0.0% 00:27:52.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.531 complete : 0=0.0%, 4=89.5%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.531 issued rwts: total=5856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.531 filename0: (groupid=0, jobs=1): err= 0: pid=3025818: Wed Apr 24 21:43:14 2024 00:27:52.531 read: IOPS=604, BW=2418KiB/s (2476kB/s)(23.6MiB/10005msec) 00:27:52.531 slat (nsec): min=5669, max=72902, avg=18008.64, stdev=13486.23 00:27:52.531 clat (usec): min=5358, max=57673, avg=26381.37, stdev=4373.98 00:27:52.531 lat (usec): min=5364, max=57688, avg=26399.38, stdev=4373.10 00:27:52.531 clat percentiles (usec): 00:27:52.531 | 1.00th=[15664], 5.00th=[22676], 10.00th=[23462], 20.00th=[24249], 00:27:52.531 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:27:52.531 | 70.00th=[26084], 80.00th=[29230], 90.00th=[32375], 95.00th=[34866], 00:27:52.531 | 99.00th=[40109], 99.50th=[41681], 99.90th=[50594], 99.95th=[50594], 00:27:52.531 | 99.99th=[57934] 00:27:52.531 bw ( KiB/s): min= 2256, max= 2560, per=4.18%, avg=2407.11, stdev=91.01, samples=19 00:27:52.531 iops : min= 564, max= 640, avg=601.58, stdev=22.88, samples=19 00:27:52.531 lat (msec) : 10=0.38%, 20=3.01%, 50=96.35%, 100=0.26% 00:27:52.531 cpu : usr=97.08%, sys=2.48%, ctx=17, majf=0, minf=27 00:27:52.531 IO depths : 1=0.1%, 2=0.1%, 4=4.6%, 8=79.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:52.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.531 complete : 0=0.0%, 4=89.7%, 8=7.5%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.531 issued rwts: total=6049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.531 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.531 filename0: (groupid=0, jobs=1): err= 0: pid=3025819: Wed Apr 24 21:43:14 2024 00:27:52.531 read: IOPS=570, BW=2284KiB/s (2338kB/s)(22.3MiB/10005msec) 00:27:52.531 slat (nsec): min=6238, max=71707, avg=19084.78, stdev=12187.20 00:27:52.531 clat (usec): min=9717, max=57000, avg=27899.01, stdev=5297.70 00:27:52.531 lat (usec): min=9730, max=57017, avg=27918.10, stdev=5296.25 00:27:52.531 clat percentiles (usec): 
00:27:52.531 | 1.00th=[15664], 5.00th=[21365], 10.00th=[23462], 20.00th=[24249], 00:27:52.531 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[28181], 00:27:52.531 | 70.00th=[31327], 80.00th=[32900], 90.00th=[34866], 95.00th=[35914], 00:27:52.531 | 99.00th=[41157], 99.50th=[44827], 99.90th=[56886], 99.95th=[56886], 00:27:52.531 | 99.99th=[56886] 00:27:52.531 bw ( KiB/s): min= 1916, max= 2456, per=3.95%, avg=2274.37, stdev=176.22, samples=19 00:27:52.531 iops : min= 479, max= 614, avg=568.47, stdev=44.02, samples=19 00:27:52.532 lat (msec) : 10=0.02%, 20=4.45%, 50=95.26%, 100=0.28% 00:27:52.532 cpu : usr=96.96%, sys=2.60%, ctx=16, majf=0, minf=23 00:27:52.532 IO depths : 1=1.6%, 2=3.3%, 4=11.7%, 8=71.8%, 16=11.7%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 complete : 0=0.0%, 4=90.8%, 8=4.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename0: (groupid=0, jobs=1): err= 0: pid=3025820: Wed Apr 24 21:43:14 2024 00:27:52.532 read: IOPS=568, BW=2274KiB/s (2329kB/s)(22.2MiB/10005msec) 00:27:52.532 slat (nsec): min=6194, max=83706, avg=18524.21, stdev=11569.38 00:27:52.532 clat (usec): min=12690, max=61634, avg=28034.50, stdev=5834.69 00:27:52.532 lat (usec): min=12704, max=61651, avg=28053.02, stdev=5834.58 00:27:52.532 clat percentiles (usec): 00:27:52.532 | 1.00th=[15401], 5.00th=[20317], 10.00th=[23462], 20.00th=[24249], 00:27:52.532 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[27132], 00:27:52.532 | 70.00th=[31065], 80.00th=[33162], 90.00th=[35390], 95.00th=[37487], 00:27:52.532 | 99.00th=[48497], 99.50th=[50070], 99.90th=[55313], 99.95th=[61604], 00:27:52.532 | 99.99th=[61604] 00:27:52.532 bw ( KiB/s): min= 2048, max= 2480, per=3.93%, avg=2265.89, stdev=139.29, samples=19 00:27:52.532 iops : min= 512, max= 620, avg=566.32, stdev=34.77, samples=19 00:27:52.532 lat (msec) : 20=4.80%, 50=94.76%, 100=0.44% 00:27:52.532 cpu : usr=97.15%, sys=2.41%, ctx=22, majf=0, minf=40 00:27:52.532 IO depths : 1=0.6%, 2=1.4%, 4=9.0%, 8=75.9%, 16=13.0%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=5688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename0: (groupid=0, jobs=1): err= 0: pid=3025821: Wed Apr 24 21:43:14 2024 00:27:52.532 read: IOPS=640, BW=2562KiB/s (2623kB/s)(25.0MiB/10009msec) 00:27:52.532 slat (nsec): min=6264, max=76524, avg=12694.98, stdev=7410.98 00:27:52.532 clat (usec): min=14846, max=42753, avg=24877.81, stdev=2225.29 00:27:52.532 lat (usec): min=14856, max=42768, avg=24890.51, stdev=2225.69 00:27:52.532 clat percentiles (usec): 00:27:52.532 | 1.00th=[17957], 5.00th=[22938], 10.00th=[23462], 20.00th=[23987], 00:27:52.532 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:27:52.532 | 70.00th=[25297], 80.00th=[25560], 90.00th=[26084], 95.00th=[26608], 00:27:52.532 | 99.00th=[34341], 99.50th=[36439], 99.90th=[40633], 99.95th=[42730], 00:27:52.532 | 99.99th=[42730] 00:27:52.532 bw ( KiB/s): min= 2256, max= 2688, per=4.44%, avg=2555.95, stdev=102.41, samples=19 00:27:52.532 iops : min= 564, max= 672, avg=638.84, stdev=25.59, samples=19 00:27:52.532 lat (msec) : 
20=2.59%, 50=97.41% 00:27:52.532 cpu : usr=97.60%, sys=1.98%, ctx=13, majf=0, minf=44 00:27:52.532 IO depths : 1=5.6%, 2=11.1%, 4=23.0%, 8=53.2%, 16=7.0%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=6410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename0: (groupid=0, jobs=1): err= 0: pid=3025822: Wed Apr 24 21:43:14 2024 00:27:52.532 read: IOPS=624, BW=2497KiB/s (2557kB/s)(24.4MiB/10014msec) 00:27:52.532 slat (nsec): min=6300, max=72597, avg=19923.17, stdev=12156.02 00:27:52.532 clat (usec): min=11906, max=48754, avg=25518.74, stdev=3104.37 00:27:52.532 lat (usec): min=11920, max=48781, avg=25538.66, stdev=3103.56 00:27:52.532 clat percentiles (usec): 00:27:52.532 | 1.00th=[17433], 5.00th=[22938], 10.00th=[23462], 20.00th=[23987], 00:27:52.532 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:27:52.532 | 70.00th=[25560], 80.00th=[26084], 90.00th=[28443], 95.00th=[32637], 00:27:52.532 | 99.00th=[35390], 99.50th=[37487], 99.90th=[46400], 99.95th=[46400], 00:27:52.532 | 99.99th=[48497] 00:27:52.532 bw ( KiB/s): min= 2208, max= 2640, per=4.33%, avg=2496.95, stdev=109.13, samples=20 00:27:52.532 iops : min= 552, max= 660, avg=624.10, stdev=27.23, samples=20 00:27:52.532 lat (msec) : 20=2.53%, 50=97.47% 00:27:52.532 cpu : usr=97.27%, sys=2.29%, ctx=15, majf=0, minf=27 00:27:52.532 IO depths : 1=0.3%, 2=0.8%, 4=6.8%, 8=78.8%, 16=13.3%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 complete : 0=0.0%, 4=89.5%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=6252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename0: (groupid=0, jobs=1): err= 0: pid=3025823: Wed Apr 24 21:43:14 2024 00:27:52.532 read: IOPS=571, BW=2285KiB/s (2339kB/s)(22.3MiB/10003msec) 00:27:52.532 slat (nsec): min=6178, max=72669, avg=18834.71, stdev=12531.35 00:27:52.532 clat (usec): min=5058, max=48218, avg=27898.85, stdev=5354.90 00:27:52.532 lat (usec): min=5065, max=48242, avg=27917.68, stdev=5352.92 00:27:52.532 clat percentiles (usec): 00:27:52.532 | 1.00th=[13829], 5.00th=[21627], 10.00th=[23462], 20.00th=[24249], 00:27:52.532 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[28443], 00:27:52.532 | 70.00th=[31589], 80.00th=[33162], 90.00th=[34866], 95.00th=[35914], 00:27:52.532 | 99.00th=[40109], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:27:52.532 | 99.99th=[47973] 00:27:52.532 bw ( KiB/s): min= 1795, max= 2536, per=3.93%, avg=2264.37, stdev=155.55, samples=19 00:27:52.532 iops : min= 448, max= 634, avg=565.89, stdev=38.98, samples=19 00:27:52.532 lat (msec) : 10=0.39%, 20=3.78%, 50=95.83% 00:27:52.532 cpu : usr=97.18%, sys=2.39%, ctx=17, majf=0, minf=32 00:27:52.532 IO depths : 1=1.5%, 2=3.0%, 4=10.8%, 8=72.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 complete : 0=0.0%, 4=90.7%, 8=4.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=5713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename0: (groupid=0, jobs=1): err= 0: pid=3025824: Wed Apr 24 21:43:14 2024 
00:27:52.532 read: IOPS=626, BW=2506KiB/s (2566kB/s)(24.5MiB/10007msec) 00:27:52.532 slat (nsec): min=6428, max=77042, avg=24414.00, stdev=11861.52 00:27:52.532 clat (usec): min=3890, max=49127, avg=25393.78, stdev=4130.05 00:27:52.532 lat (usec): min=3902, max=49141, avg=25418.20, stdev=4130.82 00:27:52.532 clat percentiles (usec): 00:27:52.532 | 1.00th=[14877], 5.00th=[20579], 10.00th=[23200], 20.00th=[23725], 00:27:52.532 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25297], 00:27:52.532 | 70.00th=[25560], 80.00th=[26084], 90.00th=[30540], 95.00th=[33424], 00:27:52.532 | 99.00th=[41681], 99.50th=[42730], 99.90th=[47449], 99.95th=[49021], 00:27:52.532 | 99.99th=[49021] 00:27:52.532 bw ( KiB/s): min= 2256, max= 2768, per=4.34%, avg=2499.26, stdev=108.99, samples=19 00:27:52.532 iops : min= 564, max= 692, avg=624.68, stdev=27.23, samples=19 00:27:52.532 lat (msec) : 4=0.10%, 10=0.67%, 20=3.73%, 50=95.50% 00:27:52.532 cpu : usr=91.96%, sys=4.00%, ctx=64, majf=0, minf=44 00:27:52.532 IO depths : 1=0.7%, 2=1.4%, 4=8.2%, 8=77.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 complete : 0=0.0%, 4=89.8%, 8=5.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=6270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename1: (groupid=0, jobs=1): err= 0: pid=3025825: Wed Apr 24 21:43:14 2024 00:27:52.532 read: IOPS=580, BW=2323KiB/s (2378kB/s)(22.7MiB/10004msec) 00:27:52.532 slat (nsec): min=5834, max=70241, avg=18264.85, stdev=11364.41 00:27:52.532 clat (usec): min=6947, max=49150, avg=27443.39, stdev=5287.00 00:27:52.532 lat (usec): min=6954, max=49165, avg=27461.66, stdev=5286.11 00:27:52.532 clat percentiles (usec): 00:27:52.532 | 1.00th=[14091], 5.00th=[19792], 10.00th=[23462], 20.00th=[24249], 00:27:52.532 | 30.00th=[24511], 40.00th=[25297], 50.00th=[25560], 60.00th=[26346], 00:27:52.532 | 70.00th=[30802], 80.00th=[32637], 90.00th=[34341], 95.00th=[36439], 00:27:52.532 | 99.00th=[41681], 99.50th=[43779], 99.90th=[47973], 99.95th=[49021], 00:27:52.532 | 99.99th=[49021] 00:27:52.532 bw ( KiB/s): min= 2072, max= 2480, per=4.01%, avg=2307.53, stdev=121.48, samples=19 00:27:52.532 iops : min= 518, max= 620, avg=576.68, stdev=30.36, samples=19 00:27:52.532 lat (msec) : 10=0.33%, 20=4.72%, 50=94.96% 00:27:52.532 cpu : usr=97.20%, sys=2.37%, ctx=17, majf=0, minf=34 00:27:52.532 IO depths : 1=1.0%, 2=2.2%, 4=10.1%, 8=74.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 complete : 0=0.0%, 4=90.3%, 8=4.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=5809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename1: (groupid=0, jobs=1): err= 0: pid=3025826: Wed Apr 24 21:43:14 2024 00:27:52.532 read: IOPS=573, BW=2294KiB/s (2349kB/s)(22.4MiB/10014msec) 00:27:52.532 slat (nsec): min=5288, max=69572, avg=19115.97, stdev=12164.89 00:27:52.532 clat (usec): min=11146, max=48174, avg=27775.36, stdev=5242.33 00:27:52.532 lat (usec): min=11153, max=48201, avg=27794.47, stdev=5243.26 00:27:52.532 clat percentiles (usec): 00:27:52.532 | 1.00th=[15795], 5.00th=[19792], 10.00th=[23462], 20.00th=[24249], 00:27:52.532 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[27132], 00:27:52.532 | 70.00th=[31327], 80.00th=[33162], 90.00th=[34866], 
95.00th=[36439], 00:27:52.532 | 99.00th=[41157], 99.50th=[43254], 99.90th=[46400], 99.95th=[47973], 00:27:52.532 | 99.99th=[47973] 00:27:52.532 bw ( KiB/s): min= 2048, max= 2432, per=3.99%, avg=2296.37, stdev=102.12, samples=19 00:27:52.532 iops : min= 512, max= 608, avg=573.89, stdev=25.61, samples=19 00:27:52.532 lat (msec) : 20=5.36%, 50=94.64% 00:27:52.532 cpu : usr=97.13%, sys=2.43%, ctx=17, majf=0, minf=29 00:27:52.532 IO depths : 1=1.5%, 2=3.0%, 4=11.2%, 8=72.4%, 16=12.0%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 complete : 0=0.0%, 4=90.6%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=5742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename1: (groupid=0, jobs=1): err= 0: pid=3025827: Wed Apr 24 21:43:14 2024 00:27:52.532 read: IOPS=595, BW=2383KiB/s (2441kB/s)(23.3MiB/10014msec) 00:27:52.532 slat (nsec): min=6230, max=74100, avg=19452.71, stdev=12061.86 00:27:52.532 clat (usec): min=11678, max=47337, avg=26735.09, stdev=4676.88 00:27:52.532 lat (usec): min=11699, max=47349, avg=26754.54, stdev=4676.46 00:27:52.532 clat percentiles (usec): 00:27:52.532 | 1.00th=[15533], 5.00th=[19792], 10.00th=[23462], 20.00th=[23987], 00:27:52.532 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[25822], 00:27:52.532 | 70.00th=[26608], 80.00th=[31327], 90.00th=[33817], 95.00th=[35390], 00:27:52.532 | 99.00th=[40109], 99.50th=[41681], 99.90th=[42730], 99.95th=[43779], 00:27:52.532 | 99.99th=[47449] 00:27:52.532 bw ( KiB/s): min= 2171, max= 2554, per=4.13%, avg=2377.58, stdev=87.27, samples=19 00:27:52.532 iops : min= 542, max= 638, avg=594.21, stdev=21.89, samples=19 00:27:52.532 lat (msec) : 20=5.08%, 50=94.92% 00:27:52.532 cpu : usr=96.93%, sys=2.58%, ctx=17, majf=0, minf=43 00:27:52.532 IO depths : 1=0.7%, 2=1.4%, 4=8.4%, 8=76.2%, 16=13.3%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 complete : 0=0.0%, 4=90.1%, 8=5.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=5967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename1: (groupid=0, jobs=1): err= 0: pid=3025828: Wed Apr 24 21:43:14 2024 00:27:52.532 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10007msec) 00:27:52.532 slat (nsec): min=6146, max=72234, avg=19910.43, stdev=12427.94 00:27:52.532 clat (usec): min=10396, max=59006, avg=26856.19, stdev=5226.57 00:27:52.532 lat (usec): min=10403, max=59023, avg=26876.10, stdev=5226.41 00:27:52.532 clat percentiles (usec): 00:27:52.532 | 1.00th=[14615], 5.00th=[19006], 10.00th=[23200], 20.00th=[23987], 00:27:52.532 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[25822], 00:27:52.532 | 70.00th=[27132], 80.00th=[31589], 90.00th=[34341], 95.00th=[36439], 00:27:52.532 | 99.00th=[41157], 99.50th=[44827], 99.90th=[52691], 99.95th=[58983], 00:27:52.532 | 99.99th=[58983] 00:27:52.532 bw ( KiB/s): min= 2152, max= 2488, per=4.11%, avg=2366.00, stdev=93.93, samples=19 00:27:52.532 iops : min= 538, max= 622, avg=591.42, stdev=23.46, samples=19 00:27:52.532 lat (msec) : 20=5.79%, 50=93.87%, 100=0.34% 00:27:52.532 cpu : usr=97.40%, sys=2.18%, ctx=15, majf=0, minf=42 00:27:52.532 IO depths : 1=0.6%, 2=1.1%, 4=8.3%, 8=76.9%, 16=13.1%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:27:52.532 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=5937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename1: (groupid=0, jobs=1): err= 0: pid=3025829: Wed Apr 24 21:43:14 2024 00:27:52.532 read: IOPS=627, BW=2510KiB/s (2570kB/s)(24.6MiB/10026msec) 00:27:52.532 slat (nsec): min=6174, max=70323, avg=16465.42, stdev=10898.81 00:27:52.532 clat (usec): min=3809, max=61626, avg=25391.07, stdev=4124.43 00:27:52.532 lat (usec): min=3819, max=61641, avg=25407.54, stdev=4125.99 00:27:52.532 clat percentiles (usec): 00:27:52.532 | 1.00th=[14484], 5.00th=[18744], 10.00th=[22938], 20.00th=[23725], 00:27:52.532 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25297], 00:27:52.532 | 70.00th=[25822], 80.00th=[26346], 90.00th=[31327], 95.00th=[32900], 00:27:52.532 | 99.00th=[36963], 99.50th=[38536], 99.90th=[43779], 99.95th=[44303], 00:27:52.532 | 99.99th=[61604] 00:27:52.532 bw ( KiB/s): min= 2176, max= 2821, per=4.36%, avg=2510.15, stdev=137.84, samples=20 00:27:52.532 iops : min= 544, max= 705, avg=627.40, stdev=34.45, samples=20 00:27:52.532 lat (msec) : 4=0.14%, 10=0.62%, 20=5.60%, 50=93.63%, 100=0.02% 00:27:52.532 cpu : usr=97.01%, sys=2.54%, ctx=20, majf=0, minf=33 00:27:52.532 IO depths : 1=0.8%, 2=1.6%, 4=8.0%, 8=76.7%, 16=12.8%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=6291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename1: (groupid=0, jobs=1): err= 0: pid=3025830: Wed Apr 24 21:43:14 2024 00:27:52.532 read: IOPS=597, BW=2389KiB/s (2446kB/s)(23.4MiB/10014msec) 00:27:52.532 slat (nsec): min=6232, max=72731, avg=17546.07, stdev=11095.88 00:27:52.532 clat (usec): min=10693, max=48256, avg=26687.87, stdev=5104.89 00:27:52.532 lat (usec): min=10701, max=48272, avg=26705.41, stdev=5105.62 00:27:52.532 clat percentiles (usec): 00:27:52.532 | 1.00th=[14746], 5.00th=[18744], 10.00th=[22414], 20.00th=[23987], 00:27:52.532 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[25822], 00:27:52.532 | 70.00th=[27395], 80.00th=[31851], 90.00th=[33817], 95.00th=[35914], 00:27:52.532 | 99.00th=[40109], 99.50th=[42206], 99.90th=[43779], 99.95th=[47973], 00:27:52.532 | 99.99th=[48497] 00:27:52.532 bw ( KiB/s): min= 2048, max= 2536, per=4.14%, avg=2384.70, stdev=119.19, samples=20 00:27:52.532 iops : min= 512, max= 634, avg=596.00, stdev=29.78, samples=20 00:27:52.532 lat (msec) : 20=6.89%, 50=93.11% 00:27:52.532 cpu : usr=97.07%, sys=2.49%, ctx=14, majf=0, minf=32 00:27:52.532 IO depths : 1=0.8%, 2=1.7%, 4=9.6%, 8=74.9%, 16=13.0%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 complete : 0=0.0%, 4=90.4%, 8=5.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=5981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename1: (groupid=0, jobs=1): err= 0: pid=3025831: Wed Apr 24 21:43:14 2024 00:27:52.532 read: IOPS=570, BW=2281KiB/s (2336kB/s)(22.3MiB/10009msec) 00:27:52.532 slat (nsec): min=6157, max=75080, avg=24503.67, stdev=12540.08 00:27:52.532 clat (usec): min=12555, max=51640, avg=27909.97, stdev=5357.92 00:27:52.532 lat (usec): 
min=12575, max=51664, avg=27934.47, stdev=5358.28 00:27:52.532 clat percentiles (usec): 00:27:52.532 | 1.00th=[15533], 5.00th=[21627], 10.00th=[23725], 20.00th=[24249], 00:27:52.532 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[27395], 00:27:52.532 | 70.00th=[31065], 80.00th=[32900], 90.00th=[34341], 95.00th=[36439], 00:27:52.532 | 99.00th=[43779], 99.50th=[48497], 99.90th=[51643], 99.95th=[51643], 00:27:52.532 | 99.99th=[51643] 00:27:52.532 bw ( KiB/s): min= 2048, max= 2448, per=3.98%, avg=2289.21, stdev=108.30, samples=19 00:27:52.532 iops : min= 512, max= 612, avg=572.11, stdev=27.16, samples=19 00:27:52.532 lat (msec) : 20=4.59%, 50=95.23%, 100=0.18% 00:27:52.532 cpu : usr=97.64%, sys=1.92%, ctx=56, majf=0, minf=37 00:27:52.532 IO depths : 1=0.7%, 2=1.6%, 4=9.2%, 8=75.8%, 16=12.7%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 complete : 0=0.0%, 4=90.3%, 8=4.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=5708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename1: (groupid=0, jobs=1): err= 0: pid=3025832: Wed Apr 24 21:43:14 2024 00:27:52.532 read: IOPS=585, BW=2343KiB/s (2399kB/s)(22.9MiB/10010msec) 00:27:52.532 slat (nsec): min=6204, max=74261, avg=19912.29, stdev=12404.19 00:27:52.532 clat (usec): min=9777, max=63059, avg=27192.37, stdev=5067.61 00:27:52.532 lat (usec): min=9790, max=63077, avg=27212.28, stdev=5066.21 00:27:52.532 clat percentiles (usec): 00:27:52.532 | 1.00th=[15533], 5.00th=[21365], 10.00th=[23462], 20.00th=[24249], 00:27:52.532 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25560], 60.00th=[25822], 00:27:52.532 | 70.00th=[29230], 80.00th=[31851], 90.00th=[34341], 95.00th=[36439], 00:27:52.532 | 99.00th=[41681], 99.50th=[43779], 99.90th=[55313], 99.95th=[55313], 00:27:52.532 | 99.99th=[63177] 00:27:52.532 bw ( KiB/s): min= 2128, max= 2480, per=4.04%, avg=2327.95, stdev=111.92, samples=19 00:27:52.532 iops : min= 532, max= 620, avg=581.79, stdev=28.01, samples=19 00:27:52.532 lat (msec) : 10=0.05%, 20=4.21%, 50=95.46%, 100=0.27% 00:27:52.532 cpu : usr=97.49%, sys=2.09%, ctx=14, majf=0, minf=36 00:27:52.532 IO depths : 1=0.7%, 2=1.5%, 4=8.7%, 8=76.2%, 16=12.9%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=5864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename2: (groupid=0, jobs=1): err= 0: pid=3025833: Wed Apr 24 21:43:14 2024 00:27:52.532 read: IOPS=614, BW=2456KiB/s (2515kB/s)(24.0MiB/10004msec) 00:27:52.532 slat (nsec): min=6207, max=70621, avg=17771.85, stdev=12732.18 00:27:52.532 clat (usec): min=5413, max=64867, avg=25971.69, stdev=4082.82 00:27:52.532 lat (usec): min=5419, max=64882, avg=25989.46, stdev=4081.99 00:27:52.532 clat percentiles (usec): 00:27:52.532 | 1.00th=[16188], 5.00th=[22938], 10.00th=[23462], 20.00th=[23987], 00:27:52.532 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25560], 00:27:52.532 | 70.00th=[25822], 80.00th=[26346], 90.00th=[31589], 95.00th=[33817], 00:27:52.532 | 99.00th=[41157], 99.50th=[45351], 99.90th=[56886], 99.95th=[56886], 00:27:52.532 | 99.99th=[64750] 00:27:52.532 bw ( KiB/s): min= 2176, max= 2656, per=4.25%, avg=2450.47, stdev=141.41, samples=19 00:27:52.532 iops : min= 
544, max= 664, avg=612.47, stdev=35.37, samples=19 00:27:52.532 lat (msec) : 10=0.37%, 20=1.50%, 50=97.80%, 100=0.33% 00:27:52.532 cpu : usr=97.20%, sys=2.35%, ctx=17, majf=0, minf=44 00:27:52.532 IO depths : 1=0.1%, 2=0.3%, 4=5.6%, 8=77.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:27:52.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 complete : 0=0.0%, 4=91.4%, 8=4.8%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.532 issued rwts: total=6143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.532 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.532 filename2: (groupid=0, jobs=1): err= 0: pid=3025834: Wed Apr 24 21:43:14 2024 00:27:52.533 read: IOPS=557, BW=2232KiB/s (2285kB/s)(21.8MiB/10005msec) 00:27:52.533 slat (nsec): min=6112, max=70926, avg=17498.86, stdev=11868.64 00:27:52.533 clat (usec): min=4949, max=55869, avg=28561.34, stdev=5620.65 00:27:52.533 lat (usec): min=4956, max=55891, avg=28578.84, stdev=5619.11 00:27:52.533 clat percentiles (usec): 00:27:52.533 | 1.00th=[14353], 5.00th=[22676], 10.00th=[23725], 20.00th=[24511], 00:27:52.533 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26608], 60.00th=[30802], 00:27:52.533 | 70.00th=[32113], 80.00th=[33817], 90.00th=[35390], 95.00th=[36439], 00:27:52.533 | 99.00th=[41157], 99.50th=[44827], 99.90th=[55837], 99.95th=[55837], 00:27:52.533 | 99.99th=[55837] 00:27:52.533 bw ( KiB/s): min= 1920, max= 2412, per=3.86%, avg=2222.32, stdev=168.41, samples=19 00:27:52.533 iops : min= 480, max= 603, avg=555.42, stdev=42.03, samples=19 00:27:52.533 lat (msec) : 10=0.47%, 20=3.87%, 50=95.38%, 100=0.29% 00:27:52.533 cpu : usr=97.12%, sys=2.45%, ctx=17, majf=0, minf=49 00:27:52.533 IO depths : 1=1.8%, 2=3.8%, 4=12.1%, 8=70.8%, 16=11.5%, 32=0.0%, >=64=0.0% 00:27:52.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.533 complete : 0=0.0%, 4=90.9%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.533 issued rwts: total=5582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.533 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.533 filename2: (groupid=0, jobs=1): err= 0: pid=3025835: Wed Apr 24 21:43:14 2024 00:27:52.533 read: IOPS=588, BW=2355KiB/s (2411kB/s)(23.0MiB/10018msec) 00:27:52.533 slat (nsec): min=6230, max=76932, avg=18661.03, stdev=11921.46 00:27:52.533 clat (usec): min=11192, max=48466, avg=27062.59, stdev=5134.53 00:27:52.533 lat (usec): min=11204, max=48492, avg=27081.25, stdev=5135.30 00:27:52.533 clat percentiles (usec): 00:27:52.533 | 1.00th=[15008], 5.00th=[19268], 10.00th=[22938], 20.00th=[23987], 00:27:52.533 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:27:52.533 | 70.00th=[29492], 80.00th=[32113], 90.00th=[34341], 95.00th=[35914], 00:27:52.533 | 99.00th=[41681], 99.50th=[43254], 99.90th=[44827], 99.95th=[48497], 00:27:52.533 | 99.99th=[48497] 00:27:52.533 bw ( KiB/s): min= 2048, max= 2560, per=4.08%, avg=2352.75, stdev=127.41, samples=20 00:27:52.533 iops : min= 512, max= 640, avg=588.00, stdev=31.86, samples=20 00:27:52.533 lat (msec) : 20=5.95%, 50=94.05% 00:27:52.533 cpu : usr=96.64%, sys=2.93%, ctx=15, majf=0, minf=30 00:27:52.533 IO depths : 1=1.1%, 2=2.2%, 4=9.5%, 8=75.0%, 16=12.3%, 32=0.0%, >=64=0.0% 00:27:52.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.533 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.533 issued rwts: total=5897,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.533 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:27:52.533 filename2: (groupid=0, jobs=1): err= 0: pid=3025836: Wed Apr 24 21:43:14 2024 00:27:52.533 read: IOPS=601, BW=2408KiB/s (2465kB/s)(23.6MiB/10022msec) 00:27:52.533 slat (nsec): min=2941, max=71536, avg=14101.07, stdev=9021.53 00:27:52.533 clat (usec): min=10097, max=60872, avg=26503.38, stdev=4955.34 00:27:52.533 lat (usec): min=10105, max=60889, avg=26517.48, stdev=4956.05 00:27:52.533 clat percentiles (usec): 00:27:52.533 | 1.00th=[15270], 5.00th=[20317], 10.00th=[23462], 20.00th=[23987], 00:27:52.533 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[25560], 00:27:52.533 | 70.00th=[26084], 80.00th=[29754], 90.00th=[33162], 95.00th=[35390], 00:27:52.533 | 99.00th=[43254], 99.50th=[47973], 99.90th=[61080], 99.95th=[61080], 00:27:52.533 | 99.99th=[61080] 00:27:52.533 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2407.55, stdev=126.39, samples=20 00:27:52.533 iops : min= 544, max= 672, avg=601.85, stdev=31.59, samples=20 00:27:52.533 lat (msec) : 20=4.81%, 50=94.93%, 100=0.27% 00:27:52.533 cpu : usr=97.25%, sys=2.32%, ctx=21, majf=0, minf=46 00:27:52.533 IO depths : 1=0.4%, 2=0.8%, 4=6.6%, 8=78.8%, 16=13.4%, 32=0.0%, >=64=0.0% 00:27:52.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.533 complete : 0=0.0%, 4=89.6%, 8=6.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.533 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.533 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.533 filename2: (groupid=0, jobs=1): err= 0: pid=3025837: Wed Apr 24 21:43:14 2024 00:27:52.533 read: IOPS=742, BW=2969KiB/s (3040kB/s)(29.0MiB/10018msec) 00:27:52.533 slat (nsec): min=5119, max=66104, avg=10157.67, stdev=5895.13 00:27:52.533 clat (usec): min=3485, max=46099, avg=21482.29, stdev=5310.28 00:27:52.533 lat (usec): min=3491, max=46121, avg=21492.45, stdev=5311.61 00:27:52.533 clat percentiles (usec): 00:27:52.533 | 1.00th=[ 7111], 5.00th=[13304], 10.00th=[14615], 20.00th=[16581], 00:27:52.533 | 30.00th=[17957], 40.00th=[19268], 50.00th=[23725], 60.00th=[24249], 00:27:52.533 | 70.00th=[24773], 80.00th=[25297], 90.00th=[25822], 95.00th=[27132], 00:27:52.533 | 99.00th=[35390], 99.50th=[38536], 99.90th=[43779], 99.95th=[43779], 00:27:52.533 | 99.99th=[45876] 00:27:52.533 bw ( KiB/s): min= 2384, max= 3488, per=5.15%, avg=2966.60, stdev=234.34, samples=20 00:27:52.533 iops : min= 596, max= 872, avg=741.60, stdev=58.56, samples=20 00:27:52.533 lat (msec) : 4=0.31%, 10=0.85%, 20=40.24%, 50=58.60% 00:27:52.533 cpu : usr=97.21%, sys=2.36%, ctx=13, majf=0, minf=58 00:27:52.533 IO depths : 1=2.4%, 2=7.1%, 4=20.1%, 8=60.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:27:52.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.533 complete : 0=0.0%, 4=92.9%, 8=1.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.533 issued rwts: total=7435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.533 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.533 filename2: (groupid=0, jobs=1): err= 0: pid=3025838: Wed Apr 24 21:43:14 2024 00:27:52.533 read: IOPS=596, BW=2386KiB/s (2443kB/s)(23.3MiB/10018msec) 00:27:52.533 slat (nsec): min=6215, max=77264, avg=17641.92, stdev=11327.34 00:27:52.533 clat (usec): min=11858, max=46272, avg=26710.13, stdev=5149.46 00:27:52.533 lat (usec): min=11872, max=46290, avg=26727.77, stdev=5150.37 00:27:52.533 clat percentiles (usec): 00:27:52.533 | 1.00th=[14484], 5.00th=[17957], 10.00th=[22414], 20.00th=[23987], 00:27:52.533 | 30.00th=[24511], 40.00th=[24773], 
50.00th=[25297], 60.00th=[25822], 00:27:52.533 | 70.00th=[27132], 80.00th=[31851], 90.00th=[34341], 95.00th=[36439], 00:27:52.533 | 99.00th=[40109], 99.50th=[41681], 99.90th=[44303], 99.95th=[46400], 00:27:52.533 | 99.99th=[46400] 00:27:52.533 bw ( KiB/s): min= 2176, max= 2568, per=4.14%, avg=2384.35, stdev=108.28, samples=20 00:27:52.533 iops : min= 544, max= 642, avg=595.90, stdev=27.01, samples=20 00:27:52.533 lat (msec) : 20=7.41%, 50=92.59% 00:27:52.533 cpu : usr=97.05%, sys=2.52%, ctx=16, majf=0, minf=28 00:27:52.533 IO depths : 1=1.0%, 2=2.0%, 4=9.3%, 8=74.6%, 16=13.1%, 32=0.0%, >=64=0.0% 00:27:52.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.533 complete : 0=0.0%, 4=90.3%, 8=5.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.533 issued rwts: total=5976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.533 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.533 filename2: (groupid=0, jobs=1): err= 0: pid=3025839: Wed Apr 24 21:43:14 2024 00:27:52.533 read: IOPS=587, BW=2351KiB/s (2407kB/s)(23.0MiB/10010msec) 00:27:52.533 slat (nsec): min=6034, max=71124, avg=19383.37, stdev=11969.26 00:27:52.533 clat (usec): min=10548, max=61987, avg=27110.70, stdev=5361.51 00:27:52.533 lat (usec): min=10561, max=62004, avg=27130.09, stdev=5360.68 00:27:52.533 clat percentiles (usec): 00:27:52.533 | 1.00th=[15139], 5.00th=[18744], 10.00th=[22938], 20.00th=[23987], 00:27:52.533 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:27:52.533 | 70.00th=[28967], 80.00th=[31851], 90.00th=[34341], 95.00th=[36439], 00:27:52.533 | 99.00th=[42730], 99.50th=[46400], 99.90th=[55313], 99.95th=[62129], 00:27:52.533 | 99.99th=[62129] 00:27:52.533 bw ( KiB/s): min= 1916, max= 2512, per=4.06%, avg=2336.68, stdev=131.16, samples=19 00:27:52.533 iops : min= 479, max= 628, avg=584.05, stdev=32.80, samples=19 00:27:52.533 lat (msec) : 20=5.83%, 50=93.90%, 100=0.27% 00:27:52.533 cpu : usr=97.27%, sys=2.29%, ctx=12, majf=0, minf=34 00:27:52.533 IO depths : 1=0.8%, 2=1.7%, 4=8.7%, 8=76.1%, 16=12.7%, 32=0.0%, >=64=0.0% 00:27:52.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.533 complete : 0=0.0%, 4=90.1%, 8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.533 issued rwts: total=5883,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.533 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.533 filename2: (groupid=0, jobs=1): err= 0: pid=3025840: Wed Apr 24 21:43:14 2024 00:27:52.533 read: IOPS=612, BW=2452KiB/s (2511kB/s)(24.0MiB/10005msec) 00:27:52.533 slat (nsec): min=4942, max=72169, avg=17998.52, stdev=11555.06 00:27:52.533 clat (usec): min=6749, max=69815, avg=25978.18, stdev=5375.14 00:27:52.533 lat (usec): min=6757, max=69830, avg=25996.18, stdev=5375.48 00:27:52.533 clat percentiles (usec): 00:27:52.533 | 1.00th=[13435], 5.00th=[17171], 10.00th=[21890], 20.00th=[23725], 00:27:52.533 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25035], 60.00th=[25560], 00:27:52.533 | 70.00th=[26084], 80.00th=[29230], 90.00th=[33162], 95.00th=[35390], 00:27:52.533 | 99.00th=[42206], 99.50th=[43779], 99.90th=[61604], 99.95th=[61604], 00:27:52.533 | 99.99th=[69731] 00:27:52.533 bw ( KiB/s): min= 2096, max= 2672, per=4.23%, avg=2434.95, stdev=144.19, samples=19 00:27:52.533 iops : min= 524, max= 668, avg=608.58, stdev=36.01, samples=19 00:27:52.533 lat (msec) : 10=0.16%, 20=8.67%, 50=90.84%, 100=0.33% 00:27:52.533 cpu : usr=97.38%, sys=2.21%, ctx=12, majf=0, minf=36 00:27:52.533 IO depths : 1=0.6%, 2=3.5%, 4=14.2%, 
8=68.6%, 16=13.2%, 32=0.0%, >=64=0.0% 00:27:52.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.533 complete : 0=0.0%, 4=91.7%, 8=3.9%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.533 issued rwts: total=6133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.533 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.533 00:27:52.533 Run status group 0 (all jobs): 00:27:52.533 READ: bw=56.2MiB/s (59.0MB/s), 2232KiB/s-2969KiB/s (2285kB/s-3040kB/s), io=564MiB (591MB), run=10003-10026msec 00:27:52.533 21:43:14 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:52.533 21:43:14 -- target/dif.sh@43 -- # local sub 00:27:52.533 21:43:14 -- target/dif.sh@45 -- # for sub in "$@" 00:27:52.533 21:43:14 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:52.533 21:43:14 -- target/dif.sh@36 -- # local sub_id=0 00:27:52.533 21:43:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:52.533 21:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.533 21:43:14 -- common/autotest_common.sh@10 -- # set +x 00:27:52.533 21:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.533 21:43:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:52.533 21:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.533 21:43:14 -- common/autotest_common.sh@10 -- # set +x 00:27:52.533 21:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.533 21:43:14 -- target/dif.sh@45 -- # for sub in "$@" 00:27:52.533 21:43:14 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:52.533 21:43:14 -- target/dif.sh@36 -- # local sub_id=1 00:27:52.533 21:43:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:52.533 21:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.533 21:43:14 -- common/autotest_common.sh@10 -- # set +x 00:27:52.533 21:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.533 21:43:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:52.533 21:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.533 21:43:14 -- common/autotest_common.sh@10 -- # set +x 00:27:52.533 21:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.533 21:43:14 -- target/dif.sh@45 -- # for sub in "$@" 00:27:52.533 21:43:14 -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:52.533 21:43:14 -- target/dif.sh@36 -- # local sub_id=2 00:27:52.533 21:43:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:52.533 21:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.533 21:43:14 -- common/autotest_common.sh@10 -- # set +x 00:27:52.533 21:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.533 21:43:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:52.533 21:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.533 21:43:14 -- common/autotest_common.sh@10 -- # set +x 00:27:52.533 21:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.533 21:43:14 -- target/dif.sh@115 -- # NULL_DIF=1 00:27:52.533 21:43:14 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:52.533 21:43:14 -- target/dif.sh@115 -- # numjobs=2 00:27:52.533 21:43:14 -- target/dif.sh@115 -- # iodepth=8 00:27:52.533 21:43:14 -- target/dif.sh@115 -- # runtime=5 00:27:52.533 21:43:14 -- target/dif.sh@115 -- # files=1 00:27:52.533 21:43:14 -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:52.533 21:43:14 -- target/dif.sh@28 -- # 
local sub 00:27:52.533 21:43:14 -- target/dif.sh@30 -- # for sub in "$@" 00:27:52.533 21:43:14 -- target/dif.sh@31 -- # create_subsystem 0 00:27:52.533 21:43:14 -- target/dif.sh@18 -- # local sub_id=0 00:27:52.533 21:43:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:52.533 21:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.533 21:43:14 -- common/autotest_common.sh@10 -- # set +x 00:27:52.533 bdev_null0 00:27:52.533 21:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.533 21:43:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:52.533 21:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.533 21:43:14 -- common/autotest_common.sh@10 -- # set +x 00:27:52.533 21:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.533 21:43:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:52.533 21:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.533 21:43:14 -- common/autotest_common.sh@10 -- # set +x 00:27:52.533 21:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.533 21:43:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:52.533 21:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.533 21:43:14 -- common/autotest_common.sh@10 -- # set +x 00:27:52.533 [2024-04-24 21:43:14.452946] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.533 21:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.533 21:43:14 -- target/dif.sh@30 -- # for sub in "$@" 00:27:52.533 21:43:14 -- target/dif.sh@31 -- # create_subsystem 1 00:27:52.533 21:43:14 -- target/dif.sh@18 -- # local sub_id=1 00:27:52.533 21:43:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:52.533 21:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.533 21:43:14 -- common/autotest_common.sh@10 -- # set +x 00:27:52.533 bdev_null1 00:27:52.533 21:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.533 21:43:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:52.533 21:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.533 21:43:14 -- common/autotest_common.sh@10 -- # set +x 00:27:52.533 21:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.533 21:43:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:52.533 21:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.533 21:43:14 -- common/autotest_common.sh@10 -- # set +x 00:27:52.533 21:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.533 21:43:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.533 21:43:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.533 21:43:14 -- common/autotest_common.sh@10 -- # set +x 00:27:52.533 21:43:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.533 21:43:14 -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:52.533 21:43:14 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:52.533 21:43:14 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:52.533 21:43:14 -- 
nvmf/common.sh@521 -- # config=() 00:27:52.533 21:43:14 -- nvmf/common.sh@521 -- # local subsystem config 00:27:52.533 21:43:14 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.533 21:43:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:52.533 21:43:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:52.533 { 00:27:52.533 "params": { 00:27:52.533 "name": "Nvme$subsystem", 00:27:52.533 "trtype": "$TEST_TRANSPORT", 00:27:52.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.533 "adrfam": "ipv4", 00:27:52.533 "trsvcid": "$NVMF_PORT", 00:27:52.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.533 "hdgst": ${hdgst:-false}, 00:27:52.533 "ddgst": ${ddgst:-false} 00:27:52.533 }, 00:27:52.533 "method": "bdev_nvme_attach_controller" 00:27:52.533 } 00:27:52.533 EOF 00:27:52.533 )") 00:27:52.533 21:43:14 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.533 21:43:14 -- target/dif.sh@82 -- # gen_fio_conf 00:27:52.533 21:43:14 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:52.533 21:43:14 -- target/dif.sh@54 -- # local file 00:27:52.533 21:43:14 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:52.533 21:43:14 -- target/dif.sh@56 -- # cat 00:27:52.533 21:43:14 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:52.533 21:43:14 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:52.533 21:43:14 -- common/autotest_common.sh@1327 -- # shift 00:27:52.533 21:43:14 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:52.533 21:43:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:52.533 21:43:14 -- nvmf/common.sh@543 -- # cat 00:27:52.533 21:43:14 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:52.533 21:43:14 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:52.533 21:43:14 -- target/dif.sh@72 -- # (( file <= files )) 00:27:52.533 21:43:14 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:52.533 21:43:14 -- target/dif.sh@73 -- # cat 00:27:52.533 21:43:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:52.533 21:43:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:52.533 21:43:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:52.533 { 00:27:52.533 "params": { 00:27:52.533 "name": "Nvme$subsystem", 00:27:52.533 "trtype": "$TEST_TRANSPORT", 00:27:52.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.533 "adrfam": "ipv4", 00:27:52.533 "trsvcid": "$NVMF_PORT", 00:27:52.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.533 "hdgst": ${hdgst:-false}, 00:27:52.533 "ddgst": ${ddgst:-false} 00:27:52.533 }, 00:27:52.533 "method": "bdev_nvme_attach_controller" 00:27:52.533 } 00:27:52.533 EOF 00:27:52.533 )") 00:27:52.533 21:43:14 -- target/dif.sh@72 -- # (( file++ )) 00:27:52.533 21:43:14 -- nvmf/common.sh@543 -- # cat 00:27:52.533 21:43:14 -- target/dif.sh@72 -- # (( file <= files )) 00:27:52.533 21:43:14 -- nvmf/common.sh@545 -- # jq . 
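Unwinding the rpc_cmd noise, create_subsystem for ids 0 and 1 above boils down to four RPCs per id: create a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, create the subsystem NQN, attach the bdev as its namespace, and open the TCP listener. Equivalent direct calls with scripts/rpc.py (rpc_cmd is the harness wrapper around it); destroy_subsystems later mirrors this with nvmf_delete_subsystem and bdev_null_delete:

for sub_id in 0 1; do
    rpc.py bdev_null_create "bdev_null$sub_id" 64 512 --md-size 16 --dif-type 1
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
        --serial-number "53313233-$sub_id" --allow-any-host
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" "bdev_null$sub_id"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
        -t tcp -a 10.0.0.2 -s 4420
done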
00:27:52.533 21:43:14 -- nvmf/common.sh@546 -- # IFS=, 00:27:52.533 21:43:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:52.533 "params": { 00:27:52.533 "name": "Nvme0", 00:27:52.533 "trtype": "tcp", 00:27:52.533 "traddr": "10.0.0.2", 00:27:52.533 "adrfam": "ipv4", 00:27:52.533 "trsvcid": "4420", 00:27:52.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.534 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:52.534 "hdgst": false, 00:27:52.534 "ddgst": false 00:27:52.534 }, 00:27:52.534 "method": "bdev_nvme_attach_controller" 00:27:52.534 },{ 00:27:52.534 "params": { 00:27:52.534 "name": "Nvme1", 00:27:52.534 "trtype": "tcp", 00:27:52.534 "traddr": "10.0.0.2", 00:27:52.534 "adrfam": "ipv4", 00:27:52.534 "trsvcid": "4420", 00:27:52.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:52.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:52.534 "hdgst": false, 00:27:52.534 "ddgst": false 00:27:52.534 }, 00:27:52.534 "method": "bdev_nvme_attach_controller" 00:27:52.534 }' 00:27:52.534 21:43:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:52.534 21:43:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:52.534 21:43:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:52.534 21:43:14 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:52.534 21:43:14 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:52.534 21:43:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:52.534 21:43:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:52.534 21:43:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:52.534 21:43:14 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:52.534 21:43:14 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.534 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:52.534 ... 00:27:52.534 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:52.534 ... 
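The job file itself reaches fio on /dev/fd/61 while the JSON config rides /dev/fd/62. From the filename lines just above and the dif.sh@115 parameters (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5), the generated job file for this run comes out roughly as follows; the Nvme0n1/Nvme1n1 filenames follow bdev_nvme's <name>n<nsid> naming and are a reconstruction, not a capture:

[global]
thread=1            ; assumed: the spdk_bdev engine runs jobs as threads
bs=8k,16k,128k      ; read,write,trim sizes, matching the (R)/(W)/(T) line above
numjobs=2           ; 2 jobs x 2 file sections = the 4 threads fio reports
iodepth=8
runtime=5
rw=randread

[filename0]
filename=Nvme0n1    ; namespace 1 of the controller attached as Nvme0

[filename1]
filename=Nvme1n1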
00:27:52.534 fio-3.35 00:27:52.534 Starting 4 threads 00:27:52.534 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.795 00:27:57.795 filename0: (groupid=0, jobs=1): err= 0: pid=3027831: Wed Apr 24 21:43:20 2024 00:27:57.795 read: IOPS=2703, BW=21.1MiB/s (22.1MB/s)(106MiB/5003msec) 00:27:57.795 slat (nsec): min=2693, max=59072, avg=8114.72, stdev=2622.32 00:27:57.795 clat (usec): min=1755, max=7505, avg=2938.47, stdev=407.13 00:27:57.795 lat (usec): min=1761, max=7514, avg=2946.59, stdev=407.05 00:27:57.795 clat percentiles (usec): 00:27:57.795 | 1.00th=[ 2057], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2606], 00:27:57.795 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:27:57.795 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3392], 95.00th=[ 3621], 00:27:57.795 | 99.00th=[ 4015], 99.50th=[ 4146], 99.90th=[ 4752], 99.95th=[ 7177], 00:27:57.795 | 99.99th=[ 7504] 00:27:57.795 bw ( KiB/s): min=21376, max=21888, per=25.24%, avg=21632.00, stdev=156.22, samples=10 00:27:57.795 iops : min= 2672, max= 2736, avg=2704.00, stdev=19.53, samples=10 00:27:57.795 lat (msec) : 2=0.65%, 4=98.33%, 10=1.02% 00:27:57.795 cpu : usr=92.68%, sys=6.96%, ctx=7, majf=0, minf=18 00:27:57.795 IO depths : 1=0.1%, 2=0.8%, 4=66.2%, 8=33.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.795 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.795 issued rwts: total=13528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.795 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.795 filename0: (groupid=0, jobs=1): err= 0: pid=3027832: Wed Apr 24 21:43:20 2024 00:27:57.795 read: IOPS=2663, BW=20.8MiB/s (21.8MB/s)(104MiB/5002msec) 00:27:57.795 slat (nsec): min=3847, max=26682, avg=8285.55, stdev=2645.64 00:27:57.795 clat (usec): min=1812, max=46575, avg=2982.98, stdev=1138.75 00:27:57.795 lat (usec): min=1817, max=46587, avg=2991.27, stdev=1138.64 00:27:57.795 clat percentiles (usec): 00:27:57.795 | 1.00th=[ 2089], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2671], 00:27:57.795 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:27:57.795 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3425], 95.00th=[ 3621], 00:27:57.795 | 99.00th=[ 4015], 99.50th=[ 4228], 99.90th=[ 5080], 99.95th=[46400], 00:27:57.795 | 99.99th=[46400] 00:27:57.795 bw ( KiB/s): min=19456, max=21760, per=24.86%, avg=21307.20, stdev=713.59, samples=10 00:27:57.795 iops : min= 2432, max= 2720, avg=2663.40, stdev=89.20, samples=10 00:27:57.795 lat (msec) : 2=0.47%, 4=98.44%, 10=1.03%, 50=0.06% 00:27:57.795 cpu : usr=93.60%, sys=6.06%, ctx=7, majf=0, minf=50 00:27:57.795 IO depths : 1=0.1%, 2=0.6%, 4=66.2%, 8=33.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.795 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.795 issued rwts: total=13322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.795 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.795 filename1: (groupid=0, jobs=1): err= 0: pid=3027833: Wed Apr 24 21:43:20 2024 00:27:57.795 read: IOPS=2688, BW=21.0MiB/s (22.0MB/s)(105MiB/5002msec) 00:27:57.795 slat (usec): min=5, max=102, avg= 8.21, stdev= 2.81 00:27:57.795 clat (usec): min=1635, max=5212, avg=2955.49, stdev=395.85 00:27:57.795 lat (usec): min=1641, max=5237, avg=2963.71, stdev=395.87 00:27:57.795 clat percentiles (usec): 00:27:57.795 | 1.00th=[ 2073], 5.00th=[ 2278], 
10.00th=[ 2442], 20.00th=[ 2638], 00:27:57.795 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:27:57.795 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3621], 00:27:57.795 | 99.00th=[ 4015], 99.50th=[ 4228], 99.90th=[ 4621], 99.95th=[ 4948], 00:27:57.795 | 99.99th=[ 5145] 00:27:57.795 bw ( KiB/s): min=21216, max=21680, per=25.09%, avg=21500.80, stdev=154.91, samples=10 00:27:57.795 iops : min= 2652, max= 2710, avg=2687.60, stdev=19.36, samples=10 00:27:57.795 lat (msec) : 2=0.46%, 4=98.45%, 10=1.09% 00:27:57.795 cpu : usr=93.86%, sys=5.78%, ctx=9, majf=0, minf=44 00:27:57.795 IO depths : 1=0.1%, 2=0.7%, 4=66.2%, 8=33.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.795 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.795 issued rwts: total=13446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.795 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.795 filename1: (groupid=0, jobs=1): err= 0: pid=3027834: Wed Apr 24 21:43:20 2024 00:27:57.795 read: IOPS=2659, BW=20.8MiB/s (21.8MB/s)(104MiB/5002msec) 00:27:57.795 slat (nsec): min=5684, max=76195, avg=8189.23, stdev=2790.68 00:27:57.795 clat (usec): min=1713, max=6020, avg=2988.19, stdev=397.09 00:27:57.795 lat (usec): min=1721, max=6026, avg=2996.37, stdev=397.12 00:27:57.795 clat percentiles (usec): 00:27:57.795 | 1.00th=[ 2114], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2704], 00:27:57.795 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:27:57.795 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3458], 95.00th=[ 3654], 00:27:57.795 | 99.00th=[ 4080], 99.50th=[ 4228], 99.90th=[ 4948], 99.95th=[ 5080], 00:27:57.795 | 99.99th=[ 5342] 00:27:57.795 bw ( KiB/s): min=21072, max=21552, per=24.82%, avg=21273.60, stdev=156.98, samples=10 00:27:57.795 iops : min= 2634, max= 2694, avg=2659.20, stdev=19.62, samples=10 00:27:57.795 lat (msec) : 2=0.44%, 4=98.28%, 10=1.29% 00:27:57.795 cpu : usr=93.70%, sys=5.94%, ctx=8, majf=0, minf=51 00:27:57.795 IO depths : 1=0.1%, 2=1.1%, 4=65.5%, 8=33.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.795 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.795 issued rwts: total=13301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.795 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.795 00:27:57.795 Run status group 0 (all jobs): 00:27:57.795 READ: bw=83.7MiB/s (87.8MB/s), 20.8MiB/s-21.1MiB/s (21.8MB/s-22.1MB/s), io=419MiB (439MB), run=5002-5003msec 00:27:58.053 21:43:20 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:58.053 21:43:20 -- target/dif.sh@43 -- # local sub 00:27:58.053 21:43:20 -- target/dif.sh@45 -- # for sub in "$@" 00:27:58.053 21:43:20 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:58.053 21:43:20 -- target/dif.sh@36 -- # local sub_id=0 00:27:58.053 21:43:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:58.053 21:43:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.053 21:43:20 -- common/autotest_common.sh@10 -- # set +x 00:27:58.053 21:43:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.053 21:43:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:58.053 21:43:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.053 21:43:20 -- common/autotest_common.sh@10 -- # set +x 00:27:58.053 21:43:20 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.053 21:43:20 -- target/dif.sh@45 -- # for sub in "$@" 00:27:58.053 21:43:20 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:58.053 21:43:20 -- target/dif.sh@36 -- # local sub_id=1 00:27:58.053 21:43:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:58.053 21:43:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.053 21:43:20 -- common/autotest_common.sh@10 -- # set +x 00:27:58.053 21:43:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.053 21:43:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:58.053 21:43:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.053 21:43:20 -- common/autotest_common.sh@10 -- # set +x 00:27:58.053 21:43:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.053 00:27:58.053 real 0m24.320s 00:27:58.053 user 4m53.008s 00:27:58.053 sys 0m9.495s 00:27:58.053 21:43:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:58.053 21:43:20 -- common/autotest_common.sh@10 -- # set +x 00:27:58.053 ************************************ 00:27:58.053 END TEST fio_dif_rand_params 00:27:58.053 ************************************ 00:27:58.053 21:43:20 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:58.053 21:43:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:58.053 21:43:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:58.053 21:43:20 -- common/autotest_common.sh@10 -- # set +x 00:27:58.053 ************************************ 00:27:58.053 START TEST fio_dif_digest 00:27:58.053 ************************************ 00:27:58.053 21:43:20 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:27:58.053 21:43:20 -- target/dif.sh@123 -- # local NULL_DIF 00:27:58.053 21:43:20 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:58.053 21:43:20 -- target/dif.sh@125 -- # local hdgst ddgst 00:27:58.053 21:43:20 -- target/dif.sh@127 -- # NULL_DIF=3 00:27:58.053 21:43:20 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:58.053 21:43:20 -- target/dif.sh@127 -- # numjobs=3 00:27:58.053 21:43:20 -- target/dif.sh@127 -- # iodepth=3 00:27:58.053 21:43:20 -- target/dif.sh@127 -- # runtime=10 00:27:58.053 21:43:20 -- target/dif.sh@128 -- # hdgst=true 00:27:58.053 21:43:20 -- target/dif.sh@128 -- # ddgst=true 00:27:58.053 21:43:20 -- target/dif.sh@130 -- # create_subsystems 0 00:27:58.053 21:43:20 -- target/dif.sh@28 -- # local sub 00:27:58.053 21:43:20 -- target/dif.sh@30 -- # for sub in "$@" 00:27:58.053 21:43:20 -- target/dif.sh@31 -- # create_subsystem 0 00:27:58.053 21:43:20 -- target/dif.sh@18 -- # local sub_id=0 00:27:58.053 21:43:20 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:58.053 21:43:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.053 21:43:20 -- common/autotest_common.sh@10 -- # set +x 00:27:58.311 bdev_null0 00:27:58.311 21:43:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.311 21:43:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:58.311 21:43:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.311 21:43:20 -- common/autotest_common.sh@10 -- # set +x 00:27:58.311 21:43:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.311 21:43:20 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:58.311 
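The create_subsystem helper traced above rebuilds subsystem 0 for the digest test one RPC at a time. Collected into a standalone sketch (assuming a running nvmf_tgt that already has a TCP transport created, $SPDK_DIR pointing at the SPDK checkout, and rpc.py talking to the default /var/tmp/spdk.sock):

    # Null bdev with 16-byte metadata and DIF type 3, exported over NVMe/TCP.
    RPC="$SPDK_DIR/scripts/rpc.py"
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The listener RPC is the step that produces the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice that follows in the trace.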
21:43:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.311 21:43:20 -- common/autotest_common.sh@10 -- # set +x 00:27:58.311 21:43:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.311 21:43:20 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:58.311 21:43:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.311 21:43:20 -- common/autotest_common.sh@10 -- # set +x 00:27:58.311 [2024-04-24 21:43:20.966630] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.311 21:43:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.311 21:43:20 -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:58.311 21:43:20 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:58.311 21:43:20 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:58.311 21:43:20 -- nvmf/common.sh@521 -- # config=() 00:27:58.311 21:43:20 -- nvmf/common.sh@521 -- # local subsystem config 00:27:58.311 21:43:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:58.311 21:43:20 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:58.311 21:43:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:58.311 { 00:27:58.311 "params": { 00:27:58.311 "name": "Nvme$subsystem", 00:27:58.311 "trtype": "$TEST_TRANSPORT", 00:27:58.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.311 "adrfam": "ipv4", 00:27:58.311 "trsvcid": "$NVMF_PORT", 00:27:58.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.311 "hdgst": ${hdgst:-false}, 00:27:58.311 "ddgst": ${ddgst:-false} 00:27:58.311 }, 00:27:58.311 "method": "bdev_nvme_attach_controller" 00:27:58.311 } 00:27:58.311 EOF 00:27:58.311 )") 00:27:58.311 21:43:20 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:58.311 21:43:20 -- target/dif.sh@82 -- # gen_fio_conf 00:27:58.311 21:43:20 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:58.311 21:43:20 -- target/dif.sh@54 -- # local file 00:27:58.311 21:43:20 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:58.311 21:43:20 -- target/dif.sh@56 -- # cat 00:27:58.311 21:43:20 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:58.311 21:43:20 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:58.311 21:43:20 -- common/autotest_common.sh@1327 -- # shift 00:27:58.311 21:43:20 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:58.311 21:43:20 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:58.311 21:43:20 -- nvmf/common.sh@543 -- # cat 00:27:58.311 21:43:20 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:58.311 21:43:20 -- target/dif.sh@72 -- # (( file <= files )) 00:27:58.311 21:43:20 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:58.311 21:43:20 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:58.311 21:43:20 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:58.311 21:43:20 -- nvmf/common.sh@545 -- # jq . 
00:27:58.311 21:43:20 -- nvmf/common.sh@546 -- # IFS=, 00:27:58.311 21:43:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:58.311 "params": { 00:27:58.311 "name": "Nvme0", 00:27:58.311 "trtype": "tcp", 00:27:58.311 "traddr": "10.0.0.2", 00:27:58.311 "adrfam": "ipv4", 00:27:58.311 "trsvcid": "4420", 00:27:58.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:58.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:58.311 "hdgst": true, 00:27:58.311 "ddgst": true 00:27:58.311 }, 00:27:58.311 "method": "bdev_nvme_attach_controller" 00:27:58.311 }' 00:27:58.311 21:43:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:58.311 21:43:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:58.311 21:43:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:58.311 21:43:21 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:58.311 21:43:21 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:58.311 21:43:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:58.311 21:43:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:58.311 21:43:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:58.311 21:43:21 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:58.311 21:43:21 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:58.566 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:58.566 ... 00:27:58.567 fio-3.35 00:27:58.567 Starting 3 threads 00:27:58.567 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.897 00:28:10.897 filename0: (groupid=0, jobs=1): err= 0: pid=3029051: Wed Apr 24 21:43:31 2024 00:28:10.897 read: IOPS=288, BW=36.1MiB/s (37.9MB/s)(363MiB/10048msec) 00:28:10.897 slat (nsec): min=3286, max=60208, avg=10814.95, stdev=2091.39 00:28:10.898 clat (usec): min=6098, max=56992, avg=10356.07, stdev=4053.46 00:28:10.898 lat (usec): min=6105, max=57004, avg=10366.89, stdev=4053.54 00:28:10.898 clat percentiles (usec): 00:28:10.898 | 1.00th=[ 6915], 5.00th=[ 7439], 10.00th=[ 7832], 20.00th=[ 8586], 00:28:10.898 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10552], 00:28:10.898 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11731], 95.00th=[12518], 00:28:10.898 | 99.00th=[15401], 99.50th=[51119], 99.90th=[56361], 99.95th=[56886], 00:28:10.898 | 99.99th=[56886] 00:28:10.898 bw ( KiB/s): min=29952, max=40960, per=37.01%, avg=37132.80, stdev=2906.38, samples=20 00:28:10.898 iops : min= 234, max= 320, avg=290.10, stdev=22.71, samples=20 00:28:10.898 lat (msec) : 10=44.09%, 20=55.12%, 50=0.10%, 100=0.69% 00:28:10.898 cpu : usr=91.10%, sys=8.54%, ctx=13, majf=0, minf=169 00:28:10.898 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.898 issued rwts: total=2903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.898 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:10.898 filename0: (groupid=0, jobs=1): err= 0: pid=3029052: Wed Apr 24 21:43:31 2024 00:28:10.898 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(266MiB/10005msec) 00:28:10.898 slat (usec): min=6, max=103, avg=10.83, stdev= 2.79 00:28:10.898 clat (usec): 
min=5931, max=94862, avg=14111.02, stdev=10696.49 00:28:10.898 lat (usec): min=5941, max=94874, avg=14121.85, stdev=10696.50 00:28:10.898 clat percentiles (usec): 00:28:10.898 | 1.00th=[ 7439], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10421], 00:28:10.898 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11731], 00:28:10.898 | 70.00th=[12125], 80.00th=[12911], 90.00th=[14484], 95.00th=[52167], 00:28:10.898 | 99.00th=[56361], 99.50th=[57934], 99.90th=[64226], 99.95th=[93848], 00:28:10.898 | 99.99th=[94897] 00:28:10.898 bw ( KiB/s): min=18944, max=36352, per=27.21%, avg=27297.68, stdev=3469.63, samples=19 00:28:10.898 iops : min= 148, max= 284, avg=213.26, stdev=27.11, samples=19 00:28:10.898 lat (msec) : 10=11.72%, 20=82.02%, 100=6.26% 00:28:10.898 cpu : usr=91.72%, sys=7.91%, ctx=17, majf=0, minf=128 00:28:10.898 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.898 issued rwts: total=2125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.898 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:10.898 filename0: (groupid=0, jobs=1): err= 0: pid=3029053: Wed Apr 24 21:43:31 2024 00:28:10.898 read: IOPS=283, BW=35.4MiB/s (37.2MB/s)(356MiB/10046msec) 00:28:10.898 slat (nsec): min=6001, max=23730, avg=10637.03, stdev=1990.43 00:28:10.898 clat (usec): min=5081, max=58415, avg=10554.31, stdev=4056.36 00:28:10.898 lat (usec): min=5088, max=58428, avg=10564.95, stdev=4056.49 00:28:10.898 clat percentiles (usec): 00:28:10.898 | 1.00th=[ 5735], 5.00th=[ 7373], 10.00th=[ 7767], 20.00th=[ 8717], 00:28:10.898 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10552], 60.00th=[10814], 00:28:10.898 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12125], 95.00th=[12780], 00:28:10.898 | 99.00th=[15795], 99.50th=[52691], 99.90th=[57934], 99.95th=[57934], 00:28:10.898 | 99.99th=[58459] 00:28:10.898 bw ( KiB/s): min=26112, max=41728, per=36.31%, avg=36428.80, stdev=3983.38, samples=20 00:28:10.898 iops : min= 204, max= 326, avg=284.60, stdev=31.12, samples=20 00:28:10.898 lat (msec) : 10=36.52%, 20=62.68%, 50=0.18%, 100=0.63% 00:28:10.898 cpu : usr=91.03%, sys=8.60%, ctx=16, majf=0, minf=126 00:28:10.898 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:10.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:10.898 issued rwts: total=2848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:10.898 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:10.898 00:28:10.898 Run status group 0 (all jobs): 00:28:10.898 READ: bw=98.0MiB/s (103MB/s), 26.5MiB/s-36.1MiB/s (27.8MB/s-37.9MB/s), io=985MiB (1032MB), run=10005-10048msec 00:28:10.898 21:43:32 -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:10.898 21:43:32 -- target/dif.sh@43 -- # local sub 00:28:10.898 21:43:32 -- target/dif.sh@45 -- # for sub in "$@" 00:28:10.898 21:43:32 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:10.898 21:43:32 -- target/dif.sh@36 -- # local sub_id=0 00:28:10.898 21:43:32 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:10.898 21:43:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.898 21:43:32 -- common/autotest_common.sh@10 -- # set +x 00:28:10.898 21:43:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.898 
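The fio_dif_digest job above drives fio through SPDK's bdev plugin rather than a kernel block device: gen_nvmf_target_json emits a bdev_nvme_attach_controller config with "hdgst": true and "ddgst": true, so NVMe/TCP header and data digests are exercised end to end. A minimal sketch of the same invocation using ordinary files instead of the /dev/fd substitutions the test uses (bdev.json and job.fio are placeholder names, not files from the log):

    # Preload the SPDK fio bdev plugin and hand fio a JSON bdev config.
    LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio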
21:43:32 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:10.898 21:43:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.898 21:43:32 -- common/autotest_common.sh@10 -- # set +x 00:28:10.898 21:43:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.898 00:28:10.898 real 0m11.203s 00:28:10.898 user 0m36.730s 00:28:10.898 sys 0m2.927s 00:28:10.898 21:43:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:10.898 21:43:32 -- common/autotest_common.sh@10 -- # set +x 00:28:10.898 ************************************ 00:28:10.898 END TEST fio_dif_digest 00:28:10.898 ************************************ 00:28:10.898 21:43:32 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:10.898 21:43:32 -- target/dif.sh@147 -- # nvmftestfini 00:28:10.898 21:43:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:10.898 21:43:32 -- nvmf/common.sh@117 -- # sync 00:28:10.898 21:43:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:10.898 21:43:32 -- nvmf/common.sh@120 -- # set +e 00:28:10.898 21:43:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:10.898 21:43:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:10.898 rmmod nvme_tcp 00:28:10.898 rmmod nvme_fabrics 00:28:10.898 rmmod nvme_keyring 00:28:10.898 21:43:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:10.898 21:43:32 -- nvmf/common.sh@124 -- # set -e 00:28:10.898 21:43:32 -- nvmf/common.sh@125 -- # return 0 00:28:10.898 21:43:32 -- nvmf/common.sh@478 -- # '[' -n 3019887 ']' 00:28:10.898 21:43:32 -- nvmf/common.sh@479 -- # killprocess 3019887 00:28:10.898 21:43:32 -- common/autotest_common.sh@936 -- # '[' -z 3019887 ']' 00:28:10.898 21:43:32 -- common/autotest_common.sh@940 -- # kill -0 3019887 00:28:10.898 21:43:32 -- common/autotest_common.sh@941 -- # uname 00:28:10.898 21:43:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:10.898 21:43:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3019887 00:28:10.898 21:43:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:10.898 21:43:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:10.898 21:43:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3019887' 00:28:10.898 killing process with pid 3019887 00:28:10.898 21:43:32 -- common/autotest_common.sh@955 -- # kill 3019887 00:28:10.898 21:43:32 -- common/autotest_common.sh@960 -- # wait 3019887 00:28:10.898 21:43:32 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:10.898 21:43:32 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:12.271 Waiting for block devices as requested 00:28:12.271 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:12.528 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:12.528 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:12.528 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:12.528 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:12.785 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:12.785 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:12.785 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:13.044 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:13.044 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:13.044 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:13.305 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:13.305 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:13.305 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:13.592 0000:80:04.1 (8086 2021): 
vfio-pci -> ioatdma 00:28:13.592 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:13.592 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:28:13.849 21:43:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:13.849 21:43:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:13.849 21:43:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:13.849 21:43:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:13.849 21:43:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.849 21:43:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:13.849 21:43:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.758 21:43:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:15.758 00:28:15.758 real 1m16.875s 00:28:15.758 user 7m15.094s 00:28:15.758 sys 0m30.698s 00:28:15.758 21:43:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:15.758 21:43:38 -- common/autotest_common.sh@10 -- # set +x 00:28:15.758 ************************************ 00:28:15.758 END TEST nvmf_dif 00:28:15.758 ************************************ 00:28:15.758 21:43:38 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:15.758 21:43:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:15.758 21:43:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:15.758 21:43:38 -- common/autotest_common.sh@10 -- # set +x 00:28:16.016 ************************************ 00:28:16.016 START TEST nvmf_abort_qd_sizes 00:28:16.016 ************************************ 00:28:16.016 21:43:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:16.274 * Looking for test storage... 
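The nvmf_dif teardown traced above is worth keeping as a recipe: nvmftestfini loops up to 20 times over the module unload because nvme-fabrics can still be busy right after a disconnect. A condensed sketch of the same sequence ($nvmfpid standing in for the target pid the test tracked; the netns removal happens inside _remove_spdk_ns, whose body the trace hides):

    # Unload the kernel NVMe/TCP initiator stack and clear target-side state.
    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics/nvme_keyring users
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1
    "$SPDK_DIR/scripts/setup.sh" reset   # rebind NICs/NVMe back to kernel drivers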
00:28:16.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:16.274 21:43:38 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.274 21:43:38 -- nvmf/common.sh@7 -- # uname -s 00:28:16.274 21:43:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.274 21:43:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.274 21:43:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.274 21:43:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.274 21:43:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.274 21:43:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.274 21:43:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.274 21:43:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.274 21:43:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.274 21:43:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.274 21:43:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:16.274 21:43:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:16.274 21:43:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.274 21:43:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.274 21:43:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.274 21:43:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.274 21:43:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.274 21:43:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.274 21:43:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.274 21:43:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.274 21:43:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.275 21:43:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.275 21:43:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.275 21:43:38 -- paths/export.sh@5 -- # export PATH 00:28:16.275 21:43:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.275 21:43:38 -- nvmf/common.sh@47 -- # : 0 00:28:16.275 21:43:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:16.275 21:43:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:16.275 21:43:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.275 21:43:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.275 21:43:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.275 21:43:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:16.275 21:43:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:16.275 21:43:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:16.275 21:43:38 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:16.275 21:43:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:16.275 21:43:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.275 21:43:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:16.275 21:43:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:16.275 21:43:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:16.275 21:43:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.275 21:43:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:16.275 21:43:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.275 21:43:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:16.275 21:43:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:16.275 21:43:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:16.275 21:43:38 -- common/autotest_common.sh@10 -- # set +x 00:28:22.834 21:43:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:22.835 21:43:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.835 21:43:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.835 21:43:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.835 21:43:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.835 21:43:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.835 21:43:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.835 21:43:45 -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.835 21:43:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.835 21:43:45 -- nvmf/common.sh@296 -- # e810=() 00:28:22.835 21:43:45 -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.835 21:43:45 -- nvmf/common.sh@297 -- # x722=() 00:28:22.835 21:43:45 -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.835 21:43:45 -- nvmf/common.sh@298 -- # mlx=() 00:28:22.835 21:43:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.835 21:43:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.835 21:43:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.835 21:43:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.835 21:43:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.835 21:43:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.835 21:43:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.835 21:43:45 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.835 21:43:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.835 21:43:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.835 21:43:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.835 21:43:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.835 21:43:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.835 21:43:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:22.835 21:43:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.835 21:43:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.835 21:43:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:22.835 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:22.835 21:43:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.835 21:43:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:22.835 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:22.835 21:43:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.835 21:43:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.835 21:43:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.835 21:43:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:22.835 21:43:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.835 21:43:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:22.835 Found net devices under 0000:af:00.0: cvl_0_0 00:28:22.835 21:43:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.835 21:43:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.835 21:43:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.835 21:43:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:22.835 21:43:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.835 21:43:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:22.835 Found net devices under 0000:af:00.1: cvl_0_1 00:28:22.835 21:43:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.835 21:43:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:22.835 21:43:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:22.835 21:43:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:22.835 21:43:45 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:22.835 21:43:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:22.835 21:43:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.835 21:43:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.835 21:43:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.835 21:43:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.835 21:43:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.835 21:43:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.835 21:43:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.835 21:43:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.835 21:43:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.835 21:43:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.835 21:43:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.835 21:43:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.835 21:43:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.835 21:43:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.835 21:43:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.835 21:43:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.835 21:43:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.835 21:43:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.835 21:43:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.835 21:43:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:28:22.835 00:28:22.835 --- 10.0.0.2 ping statistics --- 00:28:22.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.835 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:28:23.094 21:43:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:23.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:23.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:28:23.094 00:28:23.094 --- 10.0.0.1 ping statistics --- 00:28:23.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:23.094 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:28:23.094 21:43:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:23.094 21:43:45 -- nvmf/common.sh@411 -- # return 0 00:28:23.094 21:43:45 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:28:23.094 21:43:45 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:25.634 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:25.634 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:25.634 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:25.634 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:25.634 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:25.892 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:25.892 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:25.892 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:25.892 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:25.892 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:25.892 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:25.892 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:25.892 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:25.892 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:25.892 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:25.892 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:27.795 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:28:27.795 21:43:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.795 21:43:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:27.795 21:43:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:27.795 21:43:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.795 21:43:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:27.795 21:43:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:27.795 21:43:50 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:27.795 21:43:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:27.795 21:43:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:27.795 21:43:50 -- common/autotest_common.sh@10 -- # set +x 00:28:27.795 21:43:50 -- nvmf/common.sh@470 -- # nvmfpid=3037350 00:28:27.795 21:43:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:27.795 21:43:50 -- nvmf/common.sh@471 -- # waitforlisten 3037350 00:28:27.795 21:43:50 -- common/autotest_common.sh@817 -- # '[' -z 3037350 ']' 00:28:27.795 21:43:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.795 21:43:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:27.795 21:43:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.795 21:43:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:27.795 21:43:50 -- common/autotest_common.sh@10 -- # set +x 00:28:27.795 [2024-04-24 21:43:50.424199] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
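Everything nvmf_tcp_init did above can be replayed by hand: the e810 port cvl_0_0 moves into a private namespace and becomes the target side of a back-to-back 10.0.0.0/24 link, and the target app is then launched inside that namespace. A sketch using the exact addresses and flags from the trace:

    # Target NIC in its own netns at 10.0.0.2; initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                     # reachability check, as above
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &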
00:28:27.795 [2024-04-24 21:43:50.424248] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.795 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.795 [2024-04-24 21:43:50.500310] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:27.795 [2024-04-24 21:43:50.574513] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.795 [2024-04-24 21:43:50.574553] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.795 [2024-04-24 21:43:50.574563] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.795 [2024-04-24 21:43:50.574571] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.795 [2024-04-24 21:43:50.574578] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.795 [2024-04-24 21:43:50.574625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.795 [2024-04-24 21:43:50.574720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.795 [2024-04-24 21:43:50.574786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:27.795 [2024-04-24 21:43:50.574787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.361 21:43:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:28.361 21:43:51 -- common/autotest_common.sh@850 -- # return 0 00:28:28.361 21:43:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:28.361 21:43:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:28.361 21:43:51 -- common/autotest_common.sh@10 -- # set +x 00:28:28.619 21:43:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.619 21:43:51 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:28.619 21:43:51 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:28.619 21:43:51 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:28.619 21:43:51 -- scripts/common.sh@309 -- # local bdf bdfs 00:28:28.619 21:43:51 -- scripts/common.sh@310 -- # local nvmes 00:28:28.619 21:43:51 -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:28:28.619 21:43:51 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:28.619 21:43:51 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:28.619 21:43:51 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:28:28.619 21:43:51 -- scripts/common.sh@320 -- # uname -s 00:28:28.619 21:43:51 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:28.619 21:43:51 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:28.619 21:43:51 -- scripts/common.sh@325 -- # (( 1 )) 00:28:28.619 21:43:51 -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:28:28.619 21:43:51 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:28.619 21:43:51 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:28:28.619 21:43:51 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:28.619 21:43:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:28.619 21:43:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:28.619 21:43:51 -- 
common/autotest_common.sh@10 -- # set +x 00:28:28.619 ************************************ 00:28:28.619 START TEST spdk_target_abort 00:28:28.619 ************************************ 00:28:28.619 21:43:51 -- common/autotest_common.sh@1111 -- # spdk_target 00:28:28.619 21:43:51 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:28.619 21:43:51 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:28:28.619 21:43:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.619 21:43:51 -- common/autotest_common.sh@10 -- # set +x 00:28:31.931 spdk_targetn1 00:28:31.931 21:43:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:31.931 21:43:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:31.931 21:43:54 -- common/autotest_common.sh@10 -- # set +x 00:28:31.931 [2024-04-24 21:43:54.289119] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.931 21:43:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:31.931 21:43:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:31.931 21:43:54 -- common/autotest_common.sh@10 -- # set +x 00:28:31.931 21:43:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:31.931 21:43:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:31.931 21:43:54 -- common/autotest_common.sh@10 -- # set +x 00:28:31.931 21:43:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:31.931 21:43:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:31.931 21:43:54 -- common/autotest_common.sh@10 -- # set +x 00:28:31.931 [2024-04-24 21:43:54.325361] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.931 21:43:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
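The spdk_target_abort prologue traced above stands the target up with five RPCs: claim the local PCIe NVMe at 0000:d8:00.0 as bdev spdk_target, then export its first namespace, spdk_targetn1, over TCP. The same sequence as one sketch (arguments taken verbatim from the trace):

    # Local NVMe -> SPDK bdev -> NVMe-oF TCP subsystem nqn.2016-06.io.spdk:testnqn.
    RPC="$SPDK_DIR/scripts/rpc.py"
    $RPC bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn \
        -t tcp -a 10.0.0.2 -s 4420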
00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:31.931 21:43:54 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:31.931 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.213 Initializing NVMe Controllers 00:28:35.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:35.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:35.213 Initialization complete. Launching workers. 00:28:35.213 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 4241, failed: 0 00:28:35.213 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1492, failed to submit 2749 00:28:35.213 success 745, unsuccess 747, failed 0 00:28:35.213 21:43:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:35.213 21:43:57 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:35.213 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.493 Initializing NVMe Controllers 00:28:38.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:38.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:38.493 Initialization complete. Launching workers. 00:28:38.493 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8538, failed: 0 00:28:38.493 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1264, failed to submit 7274 00:28:38.493 success 318, unsuccess 946, failed 0 00:28:38.493 21:44:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:38.493 21:44:00 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:38.493 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.768 Initializing NVMe Controllers 00:28:41.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:41.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:41.768 Initialization complete. Launching workers. 
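The abort example is swept across queue depths so the target sees a mix of in-flight commands it can and cannot cancel; in the reports above, success and unsuccess count, roughly, aborts the controller completed versus aborts that missed a command which had already finished. The loop, condensed from the trace ($SPDK_DIR an assumed checkout root):

    # 50% read/write 4 KiB I/O with aborts injected, at three queue depths.
    for qd in 4 24 64; do
        "$SPDK_DIR/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done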
00:28:41.768 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34668, failed: 0 00:28:41.768 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2783, failed to submit 31885 00:28:41.768 success 690, unsuccess 2093, failed 0 00:28:41.768 21:44:03 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:41.768 21:44:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:41.768 21:44:04 -- common/autotest_common.sh@10 -- # set +x 00:28:41.768 21:44:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:41.768 21:44:04 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:41.768 21:44:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:41.768 21:44:04 -- common/autotest_common.sh@10 -- # set +x 00:28:43.141 21:44:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:43.141 21:44:05 -- target/abort_qd_sizes.sh@61 -- # killprocess 3037350 00:28:43.141 21:44:05 -- common/autotest_common.sh@936 -- # '[' -z 3037350 ']' 00:28:43.141 21:44:05 -- common/autotest_common.sh@940 -- # kill -0 3037350 00:28:43.141 21:44:05 -- common/autotest_common.sh@941 -- # uname 00:28:43.141 21:44:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:43.141 21:44:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3037350 00:28:43.141 21:44:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:43.141 21:44:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:43.141 21:44:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3037350' 00:28:43.141 killing process with pid 3037350 00:28:43.141 21:44:05 -- common/autotest_common.sh@955 -- # kill 3037350 00:28:43.141 21:44:05 -- common/autotest_common.sh@960 -- # wait 3037350 00:28:43.399 00:28:43.399 real 0m14.742s 00:28:43.399 user 0m58.667s 00:28:43.399 sys 0m2.776s 00:28:43.399 21:44:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:43.399 21:44:06 -- common/autotest_common.sh@10 -- # set +x 00:28:43.399 ************************************ 00:28:43.399 END TEST spdk_target_abort 00:28:43.399 ************************************ 00:28:43.399 21:44:06 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:43.399 21:44:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:43.399 21:44:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:43.399 21:44:06 -- common/autotest_common.sh@10 -- # set +x 00:28:43.657 ************************************ 00:28:43.657 START TEST kernel_target_abort 00:28:43.657 ************************************ 00:28:43.657 21:44:06 -- common/autotest_common.sh@1111 -- # kernel_target 00:28:43.657 21:44:06 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:43.657 21:44:06 -- nvmf/common.sh@717 -- # local ip 00:28:43.657 21:44:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:43.657 21:44:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:43.657 21:44:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.657 21:44:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.657 21:44:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:43.657 21:44:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.657 21:44:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:43.657 21:44:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:43.657 21:44:06 -- nvmf/common.sh@731 -- # 
echo 10.0.0.1 00:28:43.657 21:44:06 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:43.657 21:44:06 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:43.657 21:44:06 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:28:43.657 21:44:06 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:43.657 21:44:06 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:43.657 21:44:06 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:43.657 21:44:06 -- nvmf/common.sh@628 -- # local block nvme 00:28:43.657 21:44:06 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:28:43.657 21:44:06 -- nvmf/common.sh@631 -- # modprobe nvmet 00:28:43.657 21:44:06 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:43.657 21:44:06 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:46.949 Waiting for block devices as requested 00:28:46.949 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:46.949 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:46.949 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:46.949 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:46.949 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:46.949 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:46.949 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:46.949 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:46.949 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:46.949 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:47.208 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:47.208 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:47.208 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:47.466 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:47.466 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:47.466 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:47.725 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:28:47.725 21:44:10 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:28:47.725 21:44:10 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:47.725 21:44:10 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:28:47.725 21:44:10 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:47.725 21:44:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:47.725 21:44:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:47.725 21:44:10 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:28:47.725 21:44:10 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:47.725 21:44:10 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:47.725 No valid GPT data, bailing 00:28:47.725 21:44:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:47.983 21:44:10 -- scripts/common.sh@391 -- # pt= 00:28:47.983 21:44:10 -- scripts/common.sh@392 -- # return 1 00:28:47.983 21:44:10 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:28:47.983 21:44:10 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:28:47.983 21:44:10 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:47.983 21:44:10 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:47.983 21:44:10 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:47.983 21:44:10 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:47.983 21:44:10 -- nvmf/common.sh@656 -- # echo 1 00:28:47.983 21:44:10 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:28:47.983 21:44:10 -- nvmf/common.sh@658 -- # echo 1 00:28:47.983 21:44:10 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:28:47.983 21:44:10 -- nvmf/common.sh@661 -- # echo tcp 00:28:47.983 21:44:10 -- nvmf/common.sh@662 -- # echo 4420 00:28:47.983 21:44:10 -- nvmf/common.sh@663 -- # echo ipv4 00:28:47.984 21:44:10 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:47.984 21:44:10 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:28:47.984 00:28:47.984 Discovery Log Number of Records 2, Generation counter 2 00:28:47.984 =====Discovery Log Entry 0====== 00:28:47.984 trtype: tcp 00:28:47.984 adrfam: ipv4 00:28:47.984 subtype: current discovery subsystem 00:28:47.984 treq: not specified, sq flow control disable supported 00:28:47.984 portid: 1 00:28:47.984 trsvcid: 4420 00:28:47.984 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:47.984 traddr: 10.0.0.1 00:28:47.984 eflags: none 00:28:47.984 sectype: none 00:28:47.984 =====Discovery Log Entry 1====== 00:28:47.984 trtype: tcp 00:28:47.984 adrfam: ipv4 00:28:47.984 subtype: nvme subsystem 00:28:47.984 treq: not specified, sq flow control disable supported 00:28:47.984 portid: 1 00:28:47.984 trsvcid: 4420 00:28:47.984 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:47.984 traddr: 10.0.0.1 00:28:47.984 eflags: none 00:28:47.984 sectype: none 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:47.984 21:44:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:47.984 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.264 Initializing NVMe Controllers 00:28:51.264 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:51.264 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:51.264 Initialization complete. Launching workers. 00:28:51.264 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54088, failed: 0 00:28:51.264 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 54088, failed to submit 0 00:28:51.264 success 0, unsuccess 54088, failed 0 00:28:51.264 21:44:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:51.264 21:44:13 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:51.264 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.545 Initializing NVMe Controllers 00:28:54.545 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:54.545 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:54.545 Initialization complete. Launching workers. 00:28:54.545 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103452, failed: 0 00:28:54.545 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26146, failed to submit 77306 00:28:54.545 success 0, unsuccess 26146, failed 0 00:28:54.545 21:44:16 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:54.545 21:44:16 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:54.545 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.825 Initializing NVMe Controllers 00:28:57.825 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:57.825 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:57.825 Initialization complete. Launching workers. 
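For kernel_target_abort the roles flip: the kernel nvmet driver serves /dev/nvme0n1 on 10.0.0.1:4420 and SPDK's abort tool attacks it. configure_kernel_target, traced before these runs, builds the whole target through configfs; the xtrace output hides the echo redirect targets, so the attribute names below are the standard nvmet ones rather than names read from the log:

    # Kernel NVMe/TCP target from configfs (sketch; attr names assumed).
    modprobe nvmet
    modprobe nvmet-tcp
    cd /sys/kernel/config/nvmet
    mkdir -p subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 ports/1
    echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
    echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    echo 10.0.0.1 > ports/1/addr_traddr
    echo tcp      > ports/1/addr_trtype
    echo 4420     > ports/1/addr_trsvcid
    echo ipv4     > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn \
        ports/1/subsystems/

The nvme discover call in the trace then lists both the discovery subsystem and testnqn, confirming the port is live before the abort sweep starts.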
00:28:57.825 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100385, failed: 0 00:28:57.825 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25106, failed to submit 75279 00:28:57.825 success 0, unsuccess 25106, failed 0 00:28:57.825 21:44:20 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:57.825 21:44:20 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:57.825 21:44:20 -- nvmf/common.sh@675 -- # echo 0 00:28:57.825 21:44:20 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:57.825 21:44:20 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:57.825 21:44:20 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:57.825 21:44:20 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:57.825 21:44:20 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:28:57.825 21:44:20 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:28:57.825 21:44:20 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:00.355 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:00.355 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:00.355 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:00.355 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:00.620 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:00.620 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:00.620 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:00.620 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:00.620 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:00.620 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:00.620 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:00.620 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:00.620 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:00.620 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:00.620 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:00.620 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:02.524 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:29:02.524 00:29:02.524 real 0m18.693s 00:29:02.524 user 0m6.340s 00:29:02.524 sys 0m6.115s 00:29:02.524 21:44:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:02.524 21:44:25 -- common/autotest_common.sh@10 -- # set +x 00:29:02.524 ************************************ 00:29:02.524 END TEST kernel_target_abort 00:29:02.524 ************************************ 00:29:02.524 21:44:25 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:02.524 21:44:25 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:02.524 21:44:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:02.524 21:44:25 -- nvmf/common.sh@117 -- # sync 00:29:02.524 21:44:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:02.524 21:44:25 -- nvmf/common.sh@120 -- # set +e 00:29:02.524 21:44:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:02.524 21:44:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:02.524 rmmod nvme_tcp 00:29:02.524 rmmod nvme_fabrics 00:29:02.524 rmmod nvme_keyring 00:29:02.524 21:44:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:02.524 21:44:25 -- nvmf/common.sh@124 -- # set -e 00:29:02.524 21:44:25 -- nvmf/common.sh@125 -- # return 0 00:29:02.524 21:44:25 -- nvmf/common.sh@478 -- # '[' -n 3037350 ']' 
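clean_kernel_target reverses the bring-up strictly: namespace disabled, port unlinked, directories removed leaf-first, and only then the modules. As a sketch, reusing the variables from the bring-up sketch above:

    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"   # stop exporting first
    rmdir "$subsys/namespaces/1"
    rmdir "$port"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet    # only unloads once configfs is empty again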
00:29:02.524 21:44:25 -- nvmf/common.sh@479 -- # killprocess 3037350 00:29:02.524 21:44:25 -- common/autotest_common.sh@936 -- # '[' -z 3037350 ']' 00:29:02.524 21:44:25 -- common/autotest_common.sh@940 -- # kill -0 3037350 00:29:02.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3037350) - No such process 00:29:02.524 21:44:25 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3037350 is not found' 00:29:02.524 Process with pid 3037350 is not found 00:29:02.524 21:44:25 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:29:02.524 21:44:25 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:05.804 Waiting for block devices as requested 00:29:05.804 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:05.804 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:05.804 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:05.804 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:05.804 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:06.062 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:06.062 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:06.062 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:06.062 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:06.321 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:06.321 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:06.321 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:06.580 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:06.580 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:06.580 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:06.837 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:06.837 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:29:07.094 21:44:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:07.094 21:44:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:07.094 21:44:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:07.094 21:44:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:07.094 21:44:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.094 21:44:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:07.094 21:44:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.992 21:44:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:08.992 00:29:08.992 real 0m53.032s 00:29:08.992 user 1m9.520s 00:29:08.992 sys 0m18.905s 00:29:08.992 21:44:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:08.992 21:44:31 -- common/autotest_common.sh@10 -- # set +x 00:29:08.992 ************************************ 00:29:08.992 END TEST nvmf_abort_qd_sizes 00:29:08.992 ************************************ 00:29:09.250 21:44:31 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:09.250 21:44:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:09.250 21:44:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:09.250 21:44:31 -- common/autotest_common.sh@10 -- # set +x 00:29:09.250 ************************************ 00:29:09.250 START TEST keyring_file 00:29:09.250 ************************************ 00:29:09.250 21:44:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:09.507 * Looking for test storage... 
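killprocess tolerates an already-gone pid, as seen with 3037350 above: a kill -0 probe decides whether there is anything left to signal. A much-simplified reading of the autotest_common.sh helper (the real one also checks the process name via ps before signalling):

    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then    # signal 0: existence check only
            kill "$pid" && wait "$pid" || true
        else
            echo "Process with pid $pid is not found"
        fi
    }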
00:29:09.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:09.507 21:44:32 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:09.507 21:44:32 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.507 21:44:32 -- nvmf/common.sh@7 -- # uname -s 00:29:09.507 21:44:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.507 21:44:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.507 21:44:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.507 21:44:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.507 21:44:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.507 21:44:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.507 21:44:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.507 21:44:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.507 21:44:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.507 21:44:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.507 21:44:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:09.507 21:44:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:09.507 21:44:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.507 21:44:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.507 21:44:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.507 21:44:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.507 21:44:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.507 21:44:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.507 21:44:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.507 21:44:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.507 21:44:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.507 21:44:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.507 21:44:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.507 21:44:32 -- paths/export.sh@5 -- # export PATH 00:29:09.507 21:44:32 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.507 21:44:32 -- nvmf/common.sh@47 -- # : 0 00:29:09.507 21:44:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:09.507 21:44:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:09.507 21:44:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.507 21:44:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.507 21:44:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.507 21:44:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:09.507 21:44:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:09.507 21:44:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:09.507 21:44:32 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:09.507 21:44:32 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:09.507 21:44:32 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:09.507 21:44:32 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:09.507 21:44:32 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:09.507 21:44:32 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:09.508 21:44:32 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:09.508 21:44:32 -- keyring/common.sh@15 -- # local name key digest path 00:29:09.508 21:44:32 -- keyring/common.sh@17 -- # name=key0 00:29:09.508 21:44:32 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:09.508 21:44:32 -- keyring/common.sh@17 -- # digest=0 00:29:09.508 21:44:32 -- keyring/common.sh@18 -- # mktemp 00:29:09.508 21:44:32 -- keyring/common.sh@18 -- # path=/tmp/tmp.WbklrMbw4P 00:29:09.508 21:44:32 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:09.508 21:44:32 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:09.508 21:44:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:09.508 21:44:32 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:09.508 21:44:32 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:09.508 21:44:32 -- nvmf/common.sh@693 -- # digest=0 00:29:09.508 21:44:32 -- nvmf/common.sh@694 -- # python - 00:29:09.508 21:44:32 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WbklrMbw4P 00:29:09.508 21:44:32 -- keyring/common.sh@23 -- # echo /tmp/tmp.WbklrMbw4P 00:29:09.508 21:44:32 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.WbklrMbw4P 00:29:09.508 21:44:32 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:09.508 21:44:32 -- keyring/common.sh@15 -- # local name key digest path 00:29:09.508 21:44:32 -- keyring/common.sh@17 -- # name=key1 00:29:09.508 21:44:32 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:09.508 21:44:32 -- keyring/common.sh@17 -- # digest=0 00:29:09.508 21:44:32 -- keyring/common.sh@18 -- # mktemp 00:29:09.508 21:44:32 -- keyring/common.sh@18 -- # path=/tmp/tmp.qIzFNmEO3w 00:29:09.508 21:44:32 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:09.508 21:44:32 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
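prep_key writes each key into a mktemp file in the NVMe TLS PSK interchange format and locks it to mode 0600. A sketch of what the inline python computes, assuming the interchange layout is base64(key || CRC32) behind the NVMeTLSkey-1 prefix and a two-digit hash field (00 here, matching digest 0):

    key_hex=00112233445566778899aabbccddeeff
    psk=$(python3 - "$key_hex" <<'PY'
    import base64, sys, zlib
    key = bytes.fromhex(sys.argv[1])
    crc = zlib.crc32(key).to_bytes(4, 'little')     # CRC32 appended little-endian
    print('NVMeTLSkey-1:00:%s:' % base64.b64encode(key + crc).decode())
    PY
    )
    path=$(mktemp)             # e.g. /tmp/tmp.WbklrMbw4P in the trace
    echo "$psk" > "$path"
    chmod 0600 "$path"         # the keyring module insists on owner-only access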
112233445566778899aabbccddeeff00 0 00:29:09.508 21:44:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:09.508 21:44:32 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:09.508 21:44:32 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:29:09.508 21:44:32 -- nvmf/common.sh@693 -- # digest=0 00:29:09.508 21:44:32 -- nvmf/common.sh@694 -- # python - 00:29:09.508 21:44:32 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qIzFNmEO3w 00:29:09.508 21:44:32 -- keyring/common.sh@23 -- # echo /tmp/tmp.qIzFNmEO3w 00:29:09.508 21:44:32 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.qIzFNmEO3w 00:29:09.508 21:44:32 -- keyring/file.sh@30 -- # tgtpid=3046842 00:29:09.508 21:44:32 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:09.508 21:44:32 -- keyring/file.sh@32 -- # waitforlisten 3046842 00:29:09.508 21:44:32 -- common/autotest_common.sh@817 -- # '[' -z 3046842 ']' 00:29:09.508 21:44:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.508 21:44:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:09.508 21:44:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.508 21:44:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:09.508 21:44:32 -- common/autotest_common.sh@10 -- # set +x 00:29:09.508 [2024-04-24 21:44:32.370954] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 00:29:09.508 [2024-04-24 21:44:32.371004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046842 ] 00:29:09.765 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.765 [2024-04-24 21:44:32.438660] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.765 [2024-04-24 21:44:32.510378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.327 21:44:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:10.327 21:44:33 -- common/autotest_common.sh@850 -- # return 0 00:29:10.327 21:44:33 -- keyring/file.sh@33 -- # rpc_cmd 00:29:10.327 21:44:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.327 21:44:33 -- common/autotest_common.sh@10 -- # set +x 00:29:10.327 [2024-04-24 21:44:33.169024] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.327 null0 00:29:10.327 [2024-04-24 21:44:33.201083] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:10.327 [2024-04-24 21:44:33.201470] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:10.327 [2024-04-24 21:44:33.209101] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:10.327 21:44:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:10.327 21:44:33 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:10.327 21:44:33 -- common/autotest_common.sh@638 -- # local es=0 00:29:10.584 21:44:33 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:10.584 21:44:33 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:10.584 21:44:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:10.584 21:44:33 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:10.584 21:44:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:10.584 21:44:33 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:10.584 21:44:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:10.584 21:44:33 -- common/autotest_common.sh@10 -- # set +x 00:29:10.584 [2024-04-24 21:44:33.225136] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:29:10.584 { 00:29:10.584 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:10.584 "secure_channel": false, 00:29:10.584 "listen_address": { 00:29:10.584 "trtype": "tcp", 00:29:10.584 "traddr": "127.0.0.1", 00:29:10.584 "trsvcid": "4420" 00:29:10.584 }, 00:29:10.584 "method": "nvmf_subsystem_add_listener", 00:29:10.584 "req_id": 1 00:29:10.584 } 00:29:10.584 Got JSON-RPC error response 00:29:10.584 response: 00:29:10.584 { 00:29:10.584 "code": -32602, 00:29:10.584 "message": "Invalid parameters" 00:29:10.584 } 00:29:10.584 21:44:33 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:10.584 21:44:33 -- common/autotest_common.sh@641 -- # es=1 00:29:10.584 21:44:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:10.584 21:44:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:10.584 21:44:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:10.584 21:44:33 -- keyring/file.sh@46 -- # bperfpid=3046900 00:29:10.584 21:44:33 -- keyring/file.sh@48 -- # waitforlisten 3046900 /var/tmp/bperf.sock 00:29:10.584 21:44:33 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:10.584 21:44:33 -- common/autotest_common.sh@817 -- # '[' -z 3046900 ']' 00:29:10.584 21:44:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:10.584 21:44:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:10.584 21:44:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:10.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:10.584 21:44:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:10.584 21:44:33 -- common/autotest_common.sh@10 -- # set +x 00:29:10.584 [2024-04-24 21:44:33.281404] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
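Expected failures such as the duplicate-listener call above run under NOT, which inverts the wrapped command's exit status; a simplified reading of the autotest_common.sh helper (the real one additionally validates its argument via valid_exec_arg, visible in the trace):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"    # signal-killed: propagate as a real failure
        (( !es == 0 ))                    # success iff the wrapped command failed
    }
    # usage, matching the trace:
    # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0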
00:29:10.584 [2024-04-24 21:44:33.281449] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046900 ] 00:29:10.584 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.584 [2024-04-24 21:44:33.351606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.584 [2024-04-24 21:44:33.424875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.515 21:44:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:11.515 21:44:34 -- common/autotest_common.sh@850 -- # return 0 00:29:11.515 21:44:34 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WbklrMbw4P 00:29:11.515 21:44:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WbklrMbw4P 00:29:11.515 21:44:34 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qIzFNmEO3w 00:29:11.515 21:44:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qIzFNmEO3w 00:29:11.772 21:44:34 -- keyring/file.sh@51 -- # get_key key0 00:29:11.772 21:44:34 -- keyring/file.sh@51 -- # jq -r .path 00:29:11.772 21:44:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:11.772 21:44:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:11.772 21:44:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:11.772 21:44:34 -- keyring/file.sh@51 -- # [[ /tmp/tmp.WbklrMbw4P == \/\t\m\p\/\t\m\p\.\W\b\k\l\r\M\b\w\4\P ]] 00:29:11.772 21:44:34 -- keyring/file.sh@52 -- # get_key key1 00:29:11.772 21:44:34 -- keyring/file.sh@52 -- # jq -r .path 00:29:11.772 21:44:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:11.772 21:44:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:11.772 21:44:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.029 21:44:34 -- keyring/file.sh@52 -- # [[ /tmp/tmp.qIzFNmEO3w == \/\t\m\p\/\t\m\p\.\q\I\z\F\N\m\E\O\3\w ]] 00:29:12.029 21:44:34 -- keyring/file.sh@53 -- # get_refcnt key0 00:29:12.029 21:44:34 -- keyring/common.sh@12 -- # get_key key0 00:29:12.029 21:44:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.029 21:44:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.029 21:44:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:12.029 21:44:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.287 21:44:34 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:12.287 21:44:34 -- keyring/file.sh@54 -- # get_refcnt key1 00:29:12.287 21:44:34 -- keyring/common.sh@12 -- # get_key key1 00:29:12.287 21:44:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.287 21:44:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.287 21:44:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.287 21:44:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:12.287 21:44:35 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:12.287 
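Every assertion against the bdevperf side goes through its private RPC socket; keyring/common.sh wraps that and digs fields out of keyring_get_keys with jq. Essentially (run from the spdk checkout):

    bperf_cmd()  { ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
    get_refcnt() { get_key "$1" | jq -r .refcnt; }

    bperf_cmd keyring_file_add_key key0 /tmp/tmp.WbklrMbw4P
    [[ $(get_key key0 | jq -r .path) == /tmp/tmp.WbklrMbw4P ]]
    (( $(get_refcnt key0) == 1 ))     # registered but not yet attached anywhere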
21:44:35 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:12.287 21:44:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:12.548 [2024-04-24 21:44:35.297850] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:12.548 nvme0n1 00:29:12.548 21:44:35 -- keyring/file.sh@59 -- # get_refcnt key0 00:29:12.548 21:44:35 -- keyring/common.sh@12 -- # get_key key0 00:29:12.548 21:44:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.548 21:44:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.548 21:44:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:12.548 21:44:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.808 21:44:35 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:12.808 21:44:35 -- keyring/file.sh@60 -- # get_refcnt key1 00:29:12.808 21:44:35 -- keyring/common.sh@12 -- # get_key key1 00:29:12.808 21:44:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.808 21:44:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.808 21:44:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.808 21:44:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:13.066 21:44:35 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:13.066 21:44:35 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:13.066 Running I/O for 1 seconds... 
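The controller attach is an ordinary bdev_nvme_attach_controller with --psk naming the keyring entry rather than a file path, and the I/O itself is triggered over the same socket through bdevperf.py. In outline, using the helpers sketched above:

    bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    (( $(get_refcnt key0) == 2 ))     # file reference plus the live controller's reference
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests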
00:29:13.998 00:29:13.998 Latency(us) 00:29:13.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.998 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:13.998 nvme0n1 : 1.01 8814.48 34.43 0.00 0.00 14442.21 3381.66 20447.23 00:29:13.998 =================================================================================================================== 00:29:13.998 Total : 8814.48 34.43 0.00 0.00 14442.21 3381.66 20447.23 00:29:13.998 0 00:29:13.998 21:44:36 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:13.998 21:44:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:14.255 21:44:37 -- keyring/file.sh@65 -- # get_refcnt key0 00:29:14.255 21:44:37 -- keyring/common.sh@12 -- # get_key key0 00:29:14.255 21:44:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:14.255 21:44:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:14.255 21:44:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.255 21:44:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.513 21:44:37 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:14.513 21:44:37 -- keyring/file.sh@66 -- # get_refcnt key1 00:29:14.513 21:44:37 -- keyring/common.sh@12 -- # get_key key1 00:29:14.513 21:44:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:14.513 21:44:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:14.513 21:44:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.513 21:44:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.513 21:44:37 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:14.513 21:44:37 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:14.513 21:44:37 -- common/autotest_common.sh@638 -- # local es=0 00:29:14.513 21:44:37 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:14.513 21:44:37 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:14.513 21:44:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:14.513 21:44:37 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:14.513 21:44:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:14.513 21:44:37 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:14.513 21:44:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:14.771 [2024-04-24 21:44:37.550830] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:14.771 [2024-04-24 21:44:37.551501] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1874830 (107): Transport endpoint is not connected 00:29:14.771 [2024-04-24 21:44:37.552495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1874830 (9): Bad file descriptor 00:29:14.771 [2024-04-24 21:44:37.553495] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:14.771 [2024-04-24 21:44:37.553508] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:14.771 [2024-04-24 21:44:37.553518] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:14.771 request: 00:29:14.771 { 00:29:14.771 "name": "nvme0", 00:29:14.771 "trtype": "tcp", 00:29:14.771 "traddr": "127.0.0.1", 00:29:14.771 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:14.771 "adrfam": "ipv4", 00:29:14.771 "trsvcid": "4420", 00:29:14.771 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:14.771 "psk": "key1", 00:29:14.771 "method": "bdev_nvme_attach_controller", 00:29:14.771 "req_id": 1 00:29:14.771 } 00:29:14.771 Got JSON-RPC error response 00:29:14.771 response: 00:29:14.771 { 00:29:14.771 "code": -32602, 00:29:14.771 "message": "Invalid parameters" 00:29:14.771 } 00:29:14.771 21:44:37 -- common/autotest_common.sh@641 -- # es=1 00:29:14.771 21:44:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:14.771 21:44:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:14.771 21:44:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:14.771 21:44:37 -- keyring/file.sh@71 -- # get_refcnt key0 00:29:14.771 21:44:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:14.771 21:44:37 -- keyring/common.sh@12 -- # get_key key0 00:29:14.771 21:44:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:14.771 21:44:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.771 21:44:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.030 21:44:37 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:15.030 21:44:37 -- keyring/file.sh@72 -- # get_refcnt key1 00:29:15.030 21:44:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:15.030 21:44:37 -- keyring/common.sh@12 -- # get_key key1 00:29:15.030 21:44:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:15.030 21:44:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.030 21:44:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:15.030 21:44:37 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:15.030 21:44:37 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:15.030 21:44:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:15.288 21:44:38 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:15.288 21:44:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:15.545 21:44:38 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:15.545 21:44:38 -- keyring/file.sh@77 -- # jq length 00:29:15.545 21:44:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:15.803 21:44:38 
-- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:15.803 21:44:38 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.WbklrMbw4P 00:29:15.803 21:44:38 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.WbklrMbw4P 00:29:15.803 21:44:38 -- common/autotest_common.sh@638 -- # local es=0 00:29:15.803 21:44:38 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.WbklrMbw4P 00:29:15.803 21:44:38 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:15.803 21:44:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:15.803 21:44:38 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:15.803 21:44:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:15.803 21:44:38 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WbklrMbw4P 00:29:15.803 21:44:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WbklrMbw4P 00:29:15.803 [2024-04-24 21:44:38.608497] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.WbklrMbw4P': 0100660 00:29:15.803 [2024-04-24 21:44:38.608525] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:15.803 request: 00:29:15.803 { 00:29:15.803 "name": "key0", 00:29:15.803 "path": "/tmp/tmp.WbklrMbw4P", 00:29:15.803 "method": "keyring_file_add_key", 00:29:15.803 "req_id": 1 00:29:15.803 } 00:29:15.803 Got JSON-RPC error response 00:29:15.804 response: 00:29:15.804 { 00:29:15.804 "code": -1, 00:29:15.804 "message": "Operation not permitted" 00:29:15.804 } 00:29:15.804 21:44:38 -- common/autotest_common.sh@641 -- # es=1 00:29:15.804 21:44:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:15.804 21:44:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:15.804 21:44:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:15.804 21:44:38 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.WbklrMbw4P 00:29:15.804 21:44:38 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WbklrMbw4P 00:29:15.804 21:44:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WbklrMbw4P 00:29:16.061 21:44:38 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.WbklrMbw4P 00:29:16.061 21:44:38 -- keyring/file.sh@88 -- # get_refcnt key0 00:29:16.061 21:44:38 -- keyring/common.sh@12 -- # get_key key0 00:29:16.061 21:44:38 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:16.061 21:44:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:16.061 21:44:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:16.061 21:44:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:16.319 21:44:38 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:16.319 21:44:38 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:16.319 21:44:38 -- common/autotest_common.sh@638 -- # local es=0 00:29:16.319 21:44:38 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:16.319 21:44:38 -- 
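The permission failure above comes from keyring.c stat-ing the backing file at add time: any group or other bits refuse the add, and tightening the mode back to 0600 makes the same call succeed. Condensed:

    chmod 0660 /tmp/tmp.WbklrMbw4P
    NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.WbklrMbw4P   # 0100660 rejected
    chmod 0600 /tmp/tmp.WbklrMbw4P
    bperf_cmd keyring_file_add_key key0 /tmp/tmp.WbklrMbw4P       # owner-only: accepted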
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:16.319 21:44:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:16.319 21:44:38 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:16.319 21:44:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:16.319 21:44:38 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:16.319 21:44:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:16.319 [2024-04-24 21:44:39.113799] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.WbklrMbw4P': No such file or directory 00:29:16.319 [2024-04-24 21:44:39.113824] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:16.319 [2024-04-24 21:44:39.113845] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:16.319 [2024-04-24 21:44:39.113853] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:16.319 [2024-04-24 21:44:39.113861] bdev_nvme.c:6204:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:16.319 request: 00:29:16.319 { 00:29:16.319 "name": "nvme0", 00:29:16.319 "trtype": "tcp", 00:29:16.319 "traddr": "127.0.0.1", 00:29:16.319 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:16.319 "adrfam": "ipv4", 00:29:16.319 "trsvcid": "4420", 00:29:16.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:16.319 "psk": "key0", 00:29:16.320 "method": "bdev_nvme_attach_controller", 00:29:16.320 "req_id": 1 00:29:16.320 } 00:29:16.320 Got JSON-RPC error response 00:29:16.320 response: 00:29:16.320 { 00:29:16.320 "code": -19, 00:29:16.320 "message": "No such device" 00:29:16.320 } 00:29:16.320 21:44:39 -- common/autotest_common.sh@641 -- # es=1 00:29:16.320 21:44:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:16.320 21:44:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:16.320 21:44:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:16.320 21:44:39 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:16.320 21:44:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:16.578 21:44:39 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:16.578 21:44:39 -- keyring/common.sh@15 -- # local name key digest path 00:29:16.578 21:44:39 -- keyring/common.sh@17 -- # name=key0 00:29:16.578 21:44:39 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:16.578 21:44:39 -- keyring/common.sh@17 -- # digest=0 00:29:16.578 21:44:39 -- keyring/common.sh@18 -- # mktemp 00:29:16.578 21:44:39 -- keyring/common.sh@18 -- # path=/tmp/tmp.HtoRhsLkiC 00:29:16.578 21:44:39 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:16.578 21:44:39 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:16.578 21:44:39 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:16.578 21:44:39 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:16.578 21:44:39 -- 
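The second stat failure is the stale-key case: the keyring entry survives rm -f of its file, but the next attach that needs the PSK fails with -19 (no such device), which suggests the path is only read when a controller actually loads the key. Condensed:

    rm -f /tmp/tmp.WbklrMbw4P           # entry key0 stays registered
    NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0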
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:16.578 21:44:39 -- nvmf/common.sh@693 -- # digest=0 00:29:16.578 21:44:39 -- nvmf/common.sh@694 -- # python - 00:29:16.578 21:44:39 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HtoRhsLkiC 00:29:16.578 21:44:39 -- keyring/common.sh@23 -- # echo /tmp/tmp.HtoRhsLkiC 00:29:16.578 21:44:39 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.HtoRhsLkiC 00:29:16.578 21:44:39 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HtoRhsLkiC 00:29:16.578 21:44:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HtoRhsLkiC 00:29:16.835 21:44:39 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:16.835 21:44:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:16.835 nvme0n1 00:29:16.835 21:44:39 -- keyring/file.sh@99 -- # get_refcnt key0 00:29:16.835 21:44:39 -- keyring/common.sh@12 -- # get_key key0 00:29:16.835 21:44:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:16.835 21:44:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:16.835 21:44:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:16.835 21:44:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.093 21:44:39 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:17.093 21:44:39 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:17.093 21:44:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:17.350 21:44:40 -- keyring/file.sh@101 -- # get_key key0 00:29:17.350 21:44:40 -- keyring/file.sh@101 -- # jq -r .removed 00:29:17.350 21:44:40 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:17.350 21:44:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.350 21:44:40 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:17.608 21:44:40 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:17.608 21:44:40 -- keyring/file.sh@102 -- # get_refcnt key0 00:29:17.608 21:44:40 -- keyring/common.sh@12 -- # get_key key0 00:29:17.608 21:44:40 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:17.608 21:44:40 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:17.608 21:44:40 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:17.608 21:44:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.608 21:44:40 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:17.608 21:44:40 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:17.608 21:44:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:17.866 21:44:40 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:17.866 21:44:40 -- keyring/common.sh@8 -- # 
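Removing a key that a live controller still references does not free it; the entry is merely flagged, and the last reference going away with the controller is what actually drops it. The checks above amount to:

    bperf_cmd keyring_file_remove_key key0
    [[ $(get_key key0 | jq -r .removed) == true ]]   # lingers, marked removed
    (( $(get_refcnt key0) == 1 ))                    # only the controller's reference left
    bperf_cmd bdev_nvme_detach_controller nvme0      # detach releases the final reference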
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.866 21:44:40 -- keyring/file.sh@104 -- # jq length 00:29:18.124 21:44:40 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:18.124 21:44:40 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HtoRhsLkiC 00:29:18.124 21:44:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HtoRhsLkiC 00:29:18.124 21:44:40 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qIzFNmEO3w 00:29:18.124 21:44:40 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qIzFNmEO3w 00:29:18.381 21:44:41 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:18.381 21:44:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:18.638 nvme0n1 00:29:18.638 21:44:41 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:18.638 21:44:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:18.896 21:44:41 -- keyring/file.sh@112 -- # config='{ 00:29:18.896 "subsystems": [ 00:29:18.896 { 00:29:18.896 "subsystem": "keyring", 00:29:18.896 "config": [ 00:29:18.896 { 00:29:18.896 "method": "keyring_file_add_key", 00:29:18.896 "params": { 00:29:18.896 "name": "key0", 00:29:18.896 "path": "/tmp/tmp.HtoRhsLkiC" 00:29:18.896 } 00:29:18.896 }, 00:29:18.896 { 00:29:18.896 "method": "keyring_file_add_key", 00:29:18.896 "params": { 00:29:18.896 "name": "key1", 00:29:18.896 "path": "/tmp/tmp.qIzFNmEO3w" 00:29:18.896 } 00:29:18.896 } 00:29:18.896 ] 00:29:18.896 }, 00:29:18.896 { 00:29:18.896 "subsystem": "iobuf", 00:29:18.896 "config": [ 00:29:18.896 { 00:29:18.896 "method": "iobuf_set_options", 00:29:18.896 "params": { 00:29:18.896 "small_pool_count": 8192, 00:29:18.896 "large_pool_count": 1024, 00:29:18.896 "small_bufsize": 8192, 00:29:18.896 "large_bufsize": 135168 00:29:18.896 } 00:29:18.896 } 00:29:18.896 ] 00:29:18.896 }, 00:29:18.896 { 00:29:18.896 "subsystem": "sock", 00:29:18.896 "config": [ 00:29:18.896 { 00:29:18.896 "method": "sock_impl_set_options", 00:29:18.896 "params": { 00:29:18.896 "impl_name": "posix", 00:29:18.896 "recv_buf_size": 2097152, 00:29:18.896 "send_buf_size": 2097152, 00:29:18.896 "enable_recv_pipe": true, 00:29:18.896 "enable_quickack": false, 00:29:18.896 "enable_placement_id": 0, 00:29:18.896 "enable_zerocopy_send_server": true, 00:29:18.896 "enable_zerocopy_send_client": false, 00:29:18.896 "zerocopy_threshold": 0, 00:29:18.896 "tls_version": 0, 00:29:18.896 "enable_ktls": false 00:29:18.896 } 00:29:18.896 }, 00:29:18.896 { 00:29:18.896 "method": "sock_impl_set_options", 00:29:18.896 "params": { 00:29:18.896 "impl_name": "ssl", 00:29:18.896 "recv_buf_size": 4096, 00:29:18.896 "send_buf_size": 4096, 00:29:18.896 "enable_recv_pipe": true, 00:29:18.896 "enable_quickack": false, 00:29:18.896 "enable_placement_id": 0, 00:29:18.896 "enable_zerocopy_send_server": true, 00:29:18.896 "enable_zerocopy_send_client": false, 00:29:18.896 "zerocopy_threshold": 
0, 00:29:18.896 "tls_version": 0, 00:29:18.896 "enable_ktls": false 00:29:18.896 } 00:29:18.896 } 00:29:18.896 ] 00:29:18.896 }, 00:29:18.896 { 00:29:18.896 "subsystem": "vmd", 00:29:18.896 "config": [] 00:29:18.896 }, 00:29:18.896 { 00:29:18.896 "subsystem": "accel", 00:29:18.896 "config": [ 00:29:18.896 { 00:29:18.896 "method": "accel_set_options", 00:29:18.896 "params": { 00:29:18.896 "small_cache_size": 128, 00:29:18.896 "large_cache_size": 16, 00:29:18.896 "task_count": 2048, 00:29:18.896 "sequence_count": 2048, 00:29:18.896 "buf_count": 2048 00:29:18.896 } 00:29:18.896 } 00:29:18.896 ] 00:29:18.896 }, 00:29:18.896 { 00:29:18.896 "subsystem": "bdev", 00:29:18.896 "config": [ 00:29:18.896 { 00:29:18.896 "method": "bdev_set_options", 00:29:18.896 "params": { 00:29:18.896 "bdev_io_pool_size": 65535, 00:29:18.896 "bdev_io_cache_size": 256, 00:29:18.896 "bdev_auto_examine": true, 00:29:18.896 "iobuf_small_cache_size": 128, 00:29:18.896 "iobuf_large_cache_size": 16 00:29:18.896 } 00:29:18.896 }, 00:29:18.896 { 00:29:18.896 "method": "bdev_raid_set_options", 00:29:18.896 "params": { 00:29:18.896 "process_window_size_kb": 1024 00:29:18.896 } 00:29:18.896 }, 00:29:18.896 { 00:29:18.896 "method": "bdev_iscsi_set_options", 00:29:18.896 "params": { 00:29:18.896 "timeout_sec": 30 00:29:18.896 } 00:29:18.896 }, 00:29:18.896 { 00:29:18.896 "method": "bdev_nvme_set_options", 00:29:18.896 "params": { 00:29:18.896 "action_on_timeout": "none", 00:29:18.896 "timeout_us": 0, 00:29:18.896 "timeout_admin_us": 0, 00:29:18.896 "keep_alive_timeout_ms": 10000, 00:29:18.896 "arbitration_burst": 0, 00:29:18.896 "low_priority_weight": 0, 00:29:18.896 "medium_priority_weight": 0, 00:29:18.896 "high_priority_weight": 0, 00:29:18.896 "nvme_adminq_poll_period_us": 10000, 00:29:18.896 "nvme_ioq_poll_period_us": 0, 00:29:18.896 "io_queue_requests": 512, 00:29:18.896 "delay_cmd_submit": true, 00:29:18.896 "transport_retry_count": 4, 00:29:18.896 "bdev_retry_count": 3, 00:29:18.896 "transport_ack_timeout": 0, 00:29:18.896 "ctrlr_loss_timeout_sec": 0, 00:29:18.896 "reconnect_delay_sec": 0, 00:29:18.896 "fast_io_fail_timeout_sec": 0, 00:29:18.896 "disable_auto_failback": false, 00:29:18.896 "generate_uuids": false, 00:29:18.896 "transport_tos": 0, 00:29:18.896 "nvme_error_stat": false, 00:29:18.896 "rdma_srq_size": 0, 00:29:18.896 "io_path_stat": false, 00:29:18.896 "allow_accel_sequence": false, 00:29:18.896 "rdma_max_cq_size": 0, 00:29:18.896 "rdma_cm_event_timeout_ms": 0, 00:29:18.896 "dhchap_digests": [ 00:29:18.896 "sha256", 00:29:18.896 "sha384", 00:29:18.896 "sha512" 00:29:18.896 ], 00:29:18.896 "dhchap_dhgroups": [ 00:29:18.896 "null", 00:29:18.896 "ffdhe2048", 00:29:18.896 "ffdhe3072", 00:29:18.896 "ffdhe4096", 00:29:18.896 "ffdhe6144", 00:29:18.896 "ffdhe8192" 00:29:18.896 ] 00:29:18.896 } 00:29:18.896 }, 00:29:18.896 { 00:29:18.896 "method": "bdev_nvme_attach_controller", 00:29:18.896 "params": { 00:29:18.896 "name": "nvme0", 00:29:18.896 "trtype": "TCP", 00:29:18.896 "adrfam": "IPv4", 00:29:18.896 "traddr": "127.0.0.1", 00:29:18.896 "trsvcid": "4420", 00:29:18.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:18.896 "prchk_reftag": false, 00:29:18.896 "prchk_guard": false, 00:29:18.896 "ctrlr_loss_timeout_sec": 0, 00:29:18.896 "reconnect_delay_sec": 0, 00:29:18.896 "fast_io_fail_timeout_sec": 0, 00:29:18.896 "psk": "key0", 00:29:18.896 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:18.896 "hdgst": false, 00:29:18.896 "ddgst": false 00:29:18.896 } 00:29:18.896 }, 00:29:18.896 { 00:29:18.896 "method": 
"bdev_nvme_set_hotplug", 00:29:18.896 "params": { 00:29:18.896 "period_us": 100000, 00:29:18.896 "enable": false 00:29:18.896 } 00:29:18.896 }, 00:29:18.896 { 00:29:18.896 "method": "bdev_wait_for_examine" 00:29:18.896 } 00:29:18.896 ] 00:29:18.896 }, 00:29:18.896 { 00:29:18.896 "subsystem": "nbd", 00:29:18.896 "config": [] 00:29:18.896 } 00:29:18.896 ] 00:29:18.896 }' 00:29:18.896 21:44:41 -- keyring/file.sh@114 -- # killprocess 3046900 00:29:18.896 21:44:41 -- common/autotest_common.sh@936 -- # '[' -z 3046900 ']' 00:29:18.896 21:44:41 -- common/autotest_common.sh@940 -- # kill -0 3046900 00:29:18.896 21:44:41 -- common/autotest_common.sh@941 -- # uname 00:29:18.897 21:44:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:18.897 21:44:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3046900 00:29:18.897 21:44:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:18.897 21:44:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:18.897 21:44:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3046900' 00:29:18.897 killing process with pid 3046900 00:29:18.897 21:44:41 -- common/autotest_common.sh@955 -- # kill 3046900 00:29:18.897 Received shutdown signal, test time was about 1.000000 seconds 00:29:18.897 00:29:18.897 Latency(us) 00:29:18.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.897 =================================================================================================================== 00:29:18.897 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:18.897 21:44:41 -- common/autotest_common.sh@960 -- # wait 3046900 00:29:19.155 21:44:41 -- keyring/file.sh@117 -- # bperfpid=3048585 00:29:19.155 21:44:41 -- keyring/file.sh@119 -- # waitforlisten 3048585 /var/tmp/bperf.sock 00:29:19.155 21:44:41 -- common/autotest_common.sh@817 -- # '[' -z 3048585 ']' 00:29:19.155 21:44:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:19.155 21:44:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:19.155 21:44:41 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:19.155 21:44:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:19.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:19.155 21:44:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:19.155 21:44:41 -- keyring/file.sh@115 -- # echo '{ 00:29:19.155 "subsystems": [ 00:29:19.155 { 00:29:19.155 "subsystem": "keyring", 00:29:19.155 "config": [ 00:29:19.155 { 00:29:19.155 "method": "keyring_file_add_key", 00:29:19.155 "params": { 00:29:19.155 "name": "key0", 00:29:19.155 "path": "/tmp/tmp.HtoRhsLkiC" 00:29:19.155 } 00:29:19.155 }, 00:29:19.155 { 00:29:19.155 "method": "keyring_file_add_key", 00:29:19.155 "params": { 00:29:19.155 "name": "key1", 00:29:19.155 "path": "/tmp/tmp.qIzFNmEO3w" 00:29:19.155 } 00:29:19.155 } 00:29:19.155 ] 00:29:19.155 }, 00:29:19.155 { 00:29:19.155 "subsystem": "iobuf", 00:29:19.155 "config": [ 00:29:19.155 { 00:29:19.155 "method": "iobuf_set_options", 00:29:19.155 "params": { 00:29:19.155 "small_pool_count": 8192, 00:29:19.155 "large_pool_count": 1024, 00:29:19.155 "small_bufsize": 8192, 00:29:19.155 "large_bufsize": 135168 00:29:19.155 } 00:29:19.155 } 00:29:19.155 ] 00:29:19.155 }, 00:29:19.155 { 00:29:19.155 "subsystem": "sock", 00:29:19.155 "config": [ 00:29:19.155 { 00:29:19.155 "method": "sock_impl_set_options", 00:29:19.155 "params": { 00:29:19.155 "impl_name": "posix", 00:29:19.155 "recv_buf_size": 2097152, 00:29:19.155 "send_buf_size": 2097152, 00:29:19.155 "enable_recv_pipe": true, 00:29:19.155 "enable_quickack": false, 00:29:19.155 "enable_placement_id": 0, 00:29:19.155 "enable_zerocopy_send_server": true, 00:29:19.155 "enable_zerocopy_send_client": false, 00:29:19.155 "zerocopy_threshold": 0, 00:29:19.155 "tls_version": 0, 00:29:19.155 "enable_ktls": false 00:29:19.155 } 00:29:19.155 }, 00:29:19.155 { 00:29:19.155 "method": "sock_impl_set_options", 00:29:19.155 "params": { 00:29:19.155 "impl_name": "ssl", 00:29:19.155 "recv_buf_size": 4096, 00:29:19.155 "send_buf_size": 4096, 00:29:19.155 "enable_recv_pipe": true, 00:29:19.155 "enable_quickack": false, 00:29:19.155 "enable_placement_id": 0, 00:29:19.155 "enable_zerocopy_send_server": true, 00:29:19.155 "enable_zerocopy_send_client": false, 00:29:19.155 "zerocopy_threshold": 0, 00:29:19.155 "tls_version": 0, 00:29:19.155 "enable_ktls": false 00:29:19.155 } 00:29:19.155 } 00:29:19.155 ] 00:29:19.155 }, 00:29:19.155 { 00:29:19.155 "subsystem": "vmd", 00:29:19.155 "config": [] 00:29:19.155 }, 00:29:19.155 { 00:29:19.155 "subsystem": "accel", 00:29:19.155 "config": [ 00:29:19.155 { 00:29:19.155 "method": "accel_set_options", 00:29:19.155 "params": { 00:29:19.155 "small_cache_size": 128, 00:29:19.155 "large_cache_size": 16, 00:29:19.155 "task_count": 2048, 00:29:19.155 "sequence_count": 2048, 00:29:19.155 "buf_count": 2048 00:29:19.155 } 00:29:19.155 } 00:29:19.155 ] 00:29:19.155 }, 00:29:19.155 { 00:29:19.155 "subsystem": "bdev", 00:29:19.155 "config": [ 00:29:19.155 { 00:29:19.155 "method": "bdev_set_options", 00:29:19.155 "params": { 00:29:19.155 "bdev_io_pool_size": 65535, 00:29:19.155 "bdev_io_cache_size": 256, 00:29:19.155 "bdev_auto_examine": true, 00:29:19.155 "iobuf_small_cache_size": 128, 00:29:19.155 "iobuf_large_cache_size": 16 00:29:19.155 } 00:29:19.155 }, 00:29:19.155 { 00:29:19.155 "method": "bdev_raid_set_options", 00:29:19.155 "params": { 00:29:19.155 "process_window_size_kb": 1024 00:29:19.155 } 00:29:19.155 }, 00:29:19.155 { 00:29:19.155 "method": "bdev_iscsi_set_options", 00:29:19.155 "params": { 00:29:19.155 "timeout_sec": 30 00:29:19.155 } 00:29:19.155 }, 00:29:19.155 { 00:29:19.155 "method": "bdev_nvme_set_options", 00:29:19.155 "params": { 00:29:19.155 "action_on_timeout": "none", 
00:29:19.155 "timeout_us": 0, 00:29:19.155 "timeout_admin_us": 0, 00:29:19.155 "keep_alive_timeout_ms": 10000, 00:29:19.155 "arbitration_burst": 0, 00:29:19.155 "low_priority_weight": 0, 00:29:19.155 "medium_priority_weight": 0, 00:29:19.155 "high_priority_weight": 0, 00:29:19.155 "nvme_adminq_poll_period_us": 10000, 00:29:19.155 "nvme_ioq_poll_period_us": 0, 00:29:19.155 "io_queue_requests": 512, 00:29:19.155 "delay_cmd_submit": true, 00:29:19.155 "transport_retry_count": 4, 00:29:19.155 "bdev_retry_count": 3, 00:29:19.155 "transport_ack_timeout": 0, 00:29:19.155 "ctrlr_loss_timeout_sec": 0, 00:29:19.155 "reconnect_delay_sec": 0, 00:29:19.155 "fast_io_fail_timeout_sec": 0, 00:29:19.155 "disable_auto_failback": false, 00:29:19.155 "generate_uuids": false, 00:29:19.155 "transport_tos": 0, 00:29:19.155 "nvme_error_stat": false, 00:29:19.155 "rdma_srq_size": 0, 00:29:19.155 "io_path_stat": false, 00:29:19.155 "allow_accel_sequence": false, 00:29:19.155 "rdma_max_cq_size": 0, 00:29:19.155 "rdma_cm_event_timeout_ms": 0, 00:29:19.155 "dhchap_digests": [ 00:29:19.155 "sha256", 00:29:19.155 "sha384", 00:29:19.155 "sha512" 00:29:19.155 ], 00:29:19.155 "dhchap_dhgroups": [ 00:29:19.155 "null", 00:29:19.156 "ffdhe2048", 00:29:19.156 "ffdhe3072", 00:29:19.156 "ffdhe4096", 00:29:19.156 "ffdhe6144", 00:29:19.156 "ffdhe8192" 00:29:19.156 ] 00:29:19.156 } 00:29:19.156 }, 00:29:19.156 { 00:29:19.156 "method": "bdev_nvme_attach_controller", 00:29:19.156 "params": { 00:29:19.156 "name": "nvme0", 00:29:19.156 "trtype": "TCP", 00:29:19.156 "adrfam": "IPv4", 00:29:19.156 "traddr": "127.0.0.1", 00:29:19.156 "trsvcid": "4420", 00:29:19.156 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:19.156 "prchk_reftag": false, 00:29:19.156 "prchk_guard": false, 00:29:19.156 "ctrlr_loss_timeout_sec": 0, 00:29:19.156 "reconnect_delay_sec": 0, 00:29:19.156 "fast_io_fail_timeout_sec": 0, 00:29:19.156 "psk": "key0", 00:29:19.156 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:19.156 "hdgst": false, 00:29:19.156 "ddgst": false 00:29:19.156 } 00:29:19.156 }, 00:29:19.156 { 00:29:19.156 "method": "bdev_nvme_set_hotplug", 00:29:19.156 "params": { 00:29:19.156 "period_us": 100000, 00:29:19.156 "enable": false 00:29:19.156 } 00:29:19.156 }, 00:29:19.156 { 00:29:19.156 "method": "bdev_wait_for_examine" 00:29:19.156 } 00:29:19.156 ] 00:29:19.156 }, 00:29:19.156 { 00:29:19.156 "subsystem": "nbd", 00:29:19.156 "config": [] 00:29:19.156 } 00:29:19.156 ] 00:29:19.156 }' 00:29:19.156 21:44:41 -- common/autotest_common.sh@10 -- # set +x 00:29:19.156 [2024-04-24 21:44:41.929268] Starting SPDK v24.05-pre git sha1 7aadd6759 / DPDK 23.11.0 initialization... 
00:29:19.156 [2024-04-24 21:44:41.929320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3048585 ] 00:29:19.156 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.156 [2024-04-24 21:44:41.997604] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.414 [2024-04-24 21:44:42.066230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.414 [2024-04-24 21:44:42.215950] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:19.979 21:44:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:19.979 21:44:42 -- common/autotest_common.sh@850 -- # return 0 00:29:19.979 21:44:42 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:19.979 21:44:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:19.979 21:44:42 -- keyring/file.sh@120 -- # jq length 00:29:20.237 21:44:42 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:20.237 21:44:42 -- keyring/file.sh@121 -- # get_refcnt key0 00:29:20.237 21:44:42 -- keyring/common.sh@12 -- # get_key key0 00:29:20.237 21:44:42 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:20.237 21:44:42 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:20.237 21:44:42 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:20.237 21:44:42 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:20.237 21:44:43 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:20.237 21:44:43 -- keyring/file.sh@122 -- # get_refcnt key1 00:29:20.237 21:44:43 -- keyring/common.sh@12 -- # get_key key1 00:29:20.237 21:44:43 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:20.237 21:44:43 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:20.237 21:44:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:20.237 21:44:43 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:20.495 21:44:43 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:20.495 21:44:43 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:20.495 21:44:43 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:20.495 21:44:43 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:20.753 21:44:43 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:20.753 21:44:43 -- keyring/file.sh@1 -- # cleanup 00:29:20.753 21:44:43 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.HtoRhsLkiC /tmp/tmp.qIzFNmEO3w 00:29:20.753 21:44:43 -- keyring/file.sh@20 -- # killprocess 3048585 00:29:20.753 21:44:43 -- common/autotest_common.sh@936 -- # '[' -z 3048585 ']' 00:29:20.753 21:44:43 -- common/autotest_common.sh@940 -- # kill -0 3048585 00:29:20.753 21:44:43 -- common/autotest_common.sh@941 -- # uname 00:29:20.753 21:44:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:20.753 21:44:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3048585 00:29:20.753 21:44:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:20.753 21:44:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:20.753 21:44:43 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 3048585' 00:29:20.753 killing process with pid 3048585 00:29:20.753 21:44:43 -- common/autotest_common.sh@955 -- # kill 3048585 00:29:20.753 Received shutdown signal, test time was about 1.000000 seconds 00:29:20.753 00:29:20.753 Latency(us) 00:29:20.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.753 =================================================================================================================== 00:29:20.753 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:20.753 21:44:43 -- common/autotest_common.sh@960 -- # wait 3048585 00:29:21.011 21:44:43 -- keyring/file.sh@21 -- # killprocess 3046842 00:29:21.011 21:44:43 -- common/autotest_common.sh@936 -- # '[' -z 3046842 ']' 00:29:21.011 21:44:43 -- common/autotest_common.sh@940 -- # kill -0 3046842 00:29:21.011 21:44:43 -- common/autotest_common.sh@941 -- # uname 00:29:21.011 21:44:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:21.011 21:44:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3046842 00:29:21.011 21:44:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:21.011 21:44:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:21.011 21:44:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3046842' 00:29:21.011 killing process with pid 3046842 00:29:21.011 21:44:43 -- common/autotest_common.sh@955 -- # kill 3046842 00:29:21.011 [2024-04-24 21:44:43.739307] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:21.011 21:44:43 -- common/autotest_common.sh@960 -- # wait 3046842 00:29:21.269 00:29:21.269 real 0m12.016s 00:29:21.269 user 0m27.255s 00:29:21.269 sys 0m3.387s 00:29:21.269 21:44:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:21.269 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:29:21.269 ************************************ 00:29:21.269 END TEST keyring_file 00:29:21.269 ************************************ 00:29:21.269 21:44:44 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:29:21.269 21:44:44 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:21.269 21:44:44 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:29:21.269 21:44:44 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:29:21.269 21:44:44 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:21.270 21:44:44 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:29:21.270 21:44:44 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:21.270 21:44:44 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:21.270 21:44:44 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:29:21.270 21:44:44 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:29:21.270 21:44:44 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:21.270 21:44:44 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:29:21.270 21:44:44 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:21.270 21:44:44 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:21.270 21:44:44 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:29:21.270 21:44:44 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:29:21.270 21:44:44 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:29:21.270 21:44:44 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:29:21.270 21:44:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:21.270 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:29:21.270 21:44:44 -- spdk/autotest.sh@381 -- # 
autotest_cleanup 00:29:21.270 21:44:44 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:29:21.270 21:44:44 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:29:21.270 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:29:29.380 INFO: APP EXITING 00:29:29.380 INFO: killing all VMs 00:29:29.380 INFO: killing vhost app 00:29:29.380 INFO: EXIT DONE 00:29:31.277 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:29:31.277 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:29:31.277 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:29:31.277 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:29:31.277 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:29:31.277 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:29:31.277 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:29:31.534 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:29:31.534 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:29:31.535 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:29:31.535 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:29:31.535 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:29:31.535 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:29:31.535 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:29:31.535 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:29:31.535 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:29:31.791 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:29:35.071 Cleaning 00:29:35.071 Removing: /var/run/dpdk/spdk0/config 00:29:35.071 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:35.071 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:35.071 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:35.071 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:35.071 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:35.071 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:35.071 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:35.071 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:35.071 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:35.071 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:35.071 Removing: /var/run/dpdk/spdk1/config 00:29:35.071 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:35.071 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:35.071 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:35.071 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:35.071 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:35.071 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:35.071 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:35.071 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:35.071 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:35.071 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:35.071 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:35.071 Removing: /var/run/dpdk/spdk2/config 00:29:35.071 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:35.071 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:35.071 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:35.071 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:35.071 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:35.071 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 
00:29:35.071 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:35.071 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:35.071 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:35.071 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:35.071 Removing: /var/run/dpdk/spdk3/config 00:29:35.071 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:35.071 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:35.071 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:35.071 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:35.071 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:35.071 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:35.071 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:35.071 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:35.071 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:35.071 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:35.071 Removing: /var/run/dpdk/spdk4/config 00:29:35.071 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:35.071 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:35.071 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:35.071 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:35.071 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:35.071 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:35.071 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:35.071 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:35.071 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:35.071 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:35.071 Removing: /dev/shm/bdev_svc_trace.1 00:29:35.071 Removing: /dev/shm/nvmf_trace.0 00:29:35.071 Removing: /dev/shm/spdk_tgt_trace.pid2671763 00:29:35.071 Removing: /var/run/dpdk/spdk0 00:29:35.071 Removing: /var/run/dpdk/spdk1 00:29:35.071 Removing: /var/run/dpdk/spdk2 00:29:35.071 Removing: /var/run/dpdk/spdk3 00:29:35.071 Removing: /var/run/dpdk/spdk4 00:29:35.071 Removing: /var/run/dpdk/spdk_pid2669010 00:29:35.071 Removing: /var/run/dpdk/spdk_pid2670291 00:29:35.071 Removing: /var/run/dpdk/spdk_pid2671763 00:29:35.071 Removing: /var/run/dpdk/spdk_pid2672536 00:29:35.071 Removing: /var/run/dpdk/spdk_pid2673486 00:29:35.071 Removing: /var/run/dpdk/spdk_pid2673680 00:29:35.071 Removing: /var/run/dpdk/spdk_pid2674789 00:29:35.071 Removing: /var/run/dpdk/spdk_pid2675054 00:29:35.071 Removing: /var/run/dpdk/spdk_pid2675412 00:29:35.071 Removing: /var/run/dpdk/spdk_pid2677012 00:29:35.071 Removing: /var/run/dpdk/spdk_pid2678499 00:29:35.071 Removing: /var/run/dpdk/spdk_pid2678881 00:29:35.071 Removing: /var/run/dpdk/spdk_pid2679284 00:29:35.071 Removing: /var/run/dpdk/spdk_pid2679621 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2679972 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2680268 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2680556 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2680880 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2682020 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2685310 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2685875 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2686529 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2686618 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2687200 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2687463 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2687984 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2688042 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2688413 00:29:35.329 Removing: 
/var/run/dpdk/spdk_pid2688617 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2688918 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2688939 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2689579 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2689866 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2690205 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2690521 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2690602 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2690895 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2691186 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2691480 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2691776 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2692066 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2692361 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2692648 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2692943 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2693236 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2693530 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2693823 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2694114 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2694401 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2694702 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2694989 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2695284 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2695575 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2695870 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2696165 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2696463 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2696757 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2697073 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2697440 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2701551 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2749767 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2754476 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2763859 00:29:35.329 Removing: /var/run/dpdk/spdk_pid2769489 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2774122 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2774809 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2787721 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2787786 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2788698 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2789499 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2790506 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2791094 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2791096 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2791368 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2791392 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2791518 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2792437 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2793239 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2794295 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2794831 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2794839 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2795108 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2796350 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2797511 00:29:35.586 Removing: /var/run/dpdk/spdk_pid2806176 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2806636 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2811234 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2817377 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2820132 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2831618 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2841021 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2842838 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2843900 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2861681 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2865812 00:29:35.587 Removing: 
/var/run/dpdk/spdk_pid2870509 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2872463 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2874763 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2875039 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2875307 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2875449 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2876161 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2878032 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2879089 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2879606 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2881897 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2882539 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2883308 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2887618 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2898121 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2902472 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2908910 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2910269 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2912017 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2917171 00:29:35.587 Removing: /var/run/dpdk/spdk_pid2921486 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2929465 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2929471 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2934287 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2934552 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2934819 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2935326 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2935349 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2939933 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2940550 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2945341 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2948115 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2953969 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2959744 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2967841 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2967843 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2987079 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2987839 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2988388 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2989192 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2990056 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2990646 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2991406 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2991958 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2996505 00:29:35.844 Removing: /var/run/dpdk/spdk_pid2996775 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3003164 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3003470 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3005757 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3014619 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3014624 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3020192 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3022271 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3024458 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3025566 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3027657 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3028887 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3038127 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3038658 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3039182 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3041655 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3042193 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3042724 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3046842 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3046900 00:29:35.844 Removing: /var/run/dpdk/spdk_pid3048585 00:29:35.844 Clean 00:29:36.101 21:44:58 -- common/autotest_common.sh@1437 -- # 
return 0 00:29:36.101 21:44:58 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:29:36.101 21:44:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:36.101 21:44:58 -- common/autotest_common.sh@10 -- # set +x 00:29:36.101 21:44:58 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:29:36.101 21:44:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:36.101 21:44:58 -- common/autotest_common.sh@10 -- # set +x 00:29:36.358 21:44:59 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:36.358 21:44:59 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:36.359 21:44:59 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:36.359 21:44:59 -- spdk/autotest.sh@389 -- # hash lcov 00:29:36.359 21:44:59 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:36.359 21:44:59 -- spdk/autotest.sh@391 -- # hostname 00:29:36.359 21:44:59 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:36.359 geninfo: WARNING: invalid characters removed from testname! 00:29:58.293 21:45:19 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:58.860 21:45:21 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:00.762 21:45:23 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:02.136 21:45:24 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:04.036 21:45:26 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:05.408 21:45:28 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:07.313 21:45:29 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:07.313 21:45:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.313 21:45:29 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:07.313 21:45:29 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.313 21:45:29 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.313 21:45:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.313 21:45:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.313 21:45:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.313 21:45:29 -- paths/export.sh@5 -- $ export PATH 00:30:07.313 21:45:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.313 21:45:29 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:07.313 21:45:29 -- common/autobuild_common.sh@435 -- $ date +%s 00:30:07.313 21:45:29 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713987929.XXXXXX 00:30:07.313 21:45:29 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713987929.FCI4Sv 00:30:07.313 21:45:29 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:30:07.313 21:45:29 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:30:07.313 21:45:29 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:07.313 21:45:29 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:07.313 21:45:29 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:07.313 21:45:29 -- common/autobuild_common.sh@451 -- $ get_config_params 00:30:07.313 21:45:29 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:30:07.313 21:45:29 -- common/autotest_common.sh@10 -- $ set +x 00:30:07.313 21:45:29 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:07.313 21:45:29 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:30:07.313 21:45:29 -- pm/common@17 -- $ local monitor 00:30:07.313 21:45:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:07.313 21:45:29 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3062334 00:30:07.313 21:45:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:07.313 21:45:29 -- pm/common@21 -- $ date +%s 00:30:07.313 21:45:29 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3062336 00:30:07.313 21:45:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:07.313 21:45:29 -- pm/common@21 -- $ date +%s 00:30:07.313 21:45:29 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3062339 00:30:07.313 21:45:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:07.313 21:45:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713987929 00:30:07.313 21:45:29 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3062342 00:30:07.313 21:45:29 -- pm/common@21 -- $ date +%s 00:30:07.313 21:45:29 -- pm/common@26 -- $ sleep 1 00:30:07.313 21:45:29 -- pm/common@21 -- $ date +%s 00:30:07.313 21:45:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713987929 00:30:07.313 21:45:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713987929 00:30:07.313 21:45:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713987929 00:30:07.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713987929_collect-cpu-load.pm.log 00:30:07.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713987929_collect-vmstat.pm.log 00:30:07.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713987929_collect-bmc-pm.bmc.pm.log 00:30:07.313 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713987929_collect-cpu-temp.pm.log 00:30:08.264 
21:45:30 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:30:08.264 21:45:30 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:30:08.264 21:45:30 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:08.264 21:45:30 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:08.264 21:45:30 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:08.264 21:45:30 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:08.264 21:45:30 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:08.264 21:45:30 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:08.264 21:45:30 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:08.264 21:45:30 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:08.264 21:45:30 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:08.264 21:45:30 -- pm/common@30 -- $ signal_monitor_resources TERM 00:30:08.264 21:45:30 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:30:08.264 21:45:30 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:08.264 21:45:30 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:08.264 21:45:30 -- pm/common@45 -- $ pid=3062347 00:30:08.264 21:45:30 -- pm/common@52 -- $ sudo kill -TERM 3062347 00:30:08.264 21:45:30 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:08.264 21:45:30 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:08.264 21:45:30 -- pm/common@45 -- $ pid=3062350 00:30:08.264 21:45:30 -- pm/common@52 -- $ sudo kill -TERM 3062350 00:30:08.264 21:45:31 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:08.264 21:45:31 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:08.264 21:45:31 -- pm/common@45 -- $ pid=3062354 00:30:08.264 21:45:31 -- pm/common@52 -- $ sudo kill -TERM 3062354 00:30:08.264 21:45:31 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:08.264 21:45:31 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:08.264 21:45:31 -- pm/common@45 -- $ pid=3062353 00:30:08.264 21:45:31 -- pm/common@52 -- $ sudo kill -TERM 3062353 00:30:08.264 + [[ -n 2560302 ]] 00:30:08.264 + sudo kill 2560302 00:30:08.274 [Pipeline] } 00:30:08.291 [Pipeline] // stage 00:30:08.297 [Pipeline] } 00:30:08.313 [Pipeline] // timeout 00:30:08.319 [Pipeline] } 00:30:08.335 [Pipeline] // catchError 00:30:08.340 [Pipeline] } 00:30:08.357 [Pipeline] // wrap 00:30:08.364 [Pipeline] } 00:30:08.379 [Pipeline] // catchError 00:30:08.389 [Pipeline] stage 00:30:08.391 [Pipeline] { (Epilogue) 00:30:08.406 [Pipeline] catchError 00:30:08.408 [Pipeline] { 00:30:08.422 [Pipeline] echo 00:30:08.424 Cleanup processes 00:30:08.429 [Pipeline] sh 00:30:08.712 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:08.712 3062437 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:08.712 3062802 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:08.726 [Pipeline] sh 00:30:09.008 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:30:09.008 ++ grep -v 'sudo pgrep' 00:30:09.008 ++ awk '{print $1}' 00:30:09.008 + sudo kill -9 3062437 00:30:09.019 [Pipeline] sh 00:30:09.294 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:09.294 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:30:13.478 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:30:17.679 [Pipeline] sh 00:30:17.991 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:17.991 Artifacts sizes are good 00:30:18.005 [Pipeline] archiveArtifacts 00:30:18.012 Archiving artifacts 00:30:18.159 [Pipeline] sh 00:30:18.442 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:18.456 [Pipeline] cleanWs 00:30:18.466 [WS-CLEANUP] Deleting project workspace... 00:30:18.466 [WS-CLEANUP] Deferred wipeout is used... 00:30:18.473 [WS-CLEANUP] done 00:30:18.474 [Pipeline] } 00:30:18.495 [Pipeline] // catchError 00:30:18.507 [Pipeline] sh 00:30:18.784 + logger -p user.info -t JENKINS-CI 00:30:18.793 [Pipeline] } 00:30:18.808 [Pipeline] // stage 00:30:18.814 [Pipeline] } 00:30:18.832 [Pipeline] // node 00:30:18.839 [Pipeline] End of Pipeline 00:30:18.878 Finished: SUCCESS
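For reference, the lcov post-processing traced above (autotest.sh lines 391 through 397, timestamps 21:44:59 to 21:45:28) condenses to the sequence below. The LCOV_OPTS flags and the five filter patterns are copied verbatim from this run; the repo and output paths are generalized placeholders (this job used /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, its ../output directory, and -t spdk-wfp-22 as the tag), and the commented-out genhtml step is the conventional way to render the final tracefile, not something this job runs.

    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q"
    spdk=/path/to/spdk
    out=$spdk/../output

    # Capture the counters the test run produced, tagged with the hostname.
    lcov $LCOV_OPTS -c -d "$spdk" -t "$(hostname)" -o "$out/cov_test.info"

    # Merge the pre-test baseline capture with the test capture.
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # Strip everything that is not SPDK's own code out of the combined
    # report, rewriting cov_total.info in place as the job does above.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done

    # genhtml "$out/cov_total.info" -o "$out/coverage"   # optional HTML report

After filtering, autotest.sh line 398 deletes the intermediate cov_base.info and cov_test.info, leaving cov_total.info as the job's single coverage artifact.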